Monday, February 8, 2016

The feeling you get when a robot tells you "I know who I am". Are robots becoming conscious of themselves?

A robot from New York passed a self-awareness test back in July last year. There were actually three robots participating in the test, which was adapted from the classic "wise men" puzzle. The original puzzle goes as follows:

The king of a certain kingdom was looking for a new advisor and decided to call the three wisest men in the land and put them to the test. He placed a hat on each of their heads, the color of the hat being either white or blue. This of course means that they could see the hats of the other two men but not their own. The king also mentioned that at least one of them was wearing a blue hat, and that the competition would be fair to all three participants. After that, he declared that the first person to stand up and name the color of their hat would become his new advisor.

This self-awareness test was adapted to the three robots in the following way: two of the robots were told that they had been given a so-called 'dumbing pill' which stops them from talking. Afterwards, all three robots were asked which robot was still able to speak.
At first, none of them knew the answer, so they all tried to say, 'I don't know'. But when one robot heard the sound of its own voice, it said, 'Sorry, I know now!'. This indicates that the robot realized it could actually speak even though it was not sure at first, hence passing the self-awareness test.
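To make the reasoning step concrete, here is a toy sketch (my own illustration, not the code used in the actual experiment) of the update the speaking robot performs: it starts from "I don't know", hears its own voice, and revises its answer.

```python
class Robot:
    def __init__(self, name, muted):
        self.name, self.muted = name, muted   # muted == "given the dumbing pill"

    def speak(self, text):
        # A muted robot produces no sound; an unmuted one hears its own voice.
        return None if self.muted else text

    def answer(self):
        heard = self.speak("I don't know")
        if heard is not None:
            # Hearing its own voice tells the robot it was not silenced,
            # so it can revise its belief about itself.
            return "Sorry, I know now! I was not given the dumbing pill."
        return None   # the muted robots cannot reply at all


robots = [Robot("R1", muted=True), Robot("R2", muted=True), Robot("R3", muted=False)]
for robot in robots:
    reply = robot.answer()
    if reply:
        print(f"{robot.name}: {reply}")   # only R3 answers
```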

Of course this is a very basic test and does not really indicate that the robot is anywhere close to having what humans perceive as actual consciousness. But what this also indicates is that there is huge potential in Artificial Intelligence and if we create more sophisticated robots that pass even more complex self-awareness tests, then perhaps consciousness might just become a thing with robots. Whether or not we should be worried about that is also something that has been debated for decades now, and nobody seems to know the answer.
  

FPGA and SATA connectivity and compatibility

iWave Systems has developed a SATA host controller design targeted for integration with Altera's Cyclone V SoC series FPGA devices, providing an industry-compliant SATA 1.5-Gbps and SATA 3.0-Gbps interface. Serial ATA (SATA) is a computer bus standard whose primary function is transferring data between the host CPU/FPGA and mass storage devices such as hard disks and SSDs.

SATA HOST CONTROLLER INTERFACES

Figure 1: Block Diagram

The above block diagram shows the internal blocks of the SATA host controller IP. The SATA host controller includes the Link and Physical (PHY) layers. The Link layer is responsible for taking data from the constructed frames, encoding or decoding each byte using 8b/10b, and inserting primitives (e.g. ALIGNp, SYNCp). The Physical layer is responsible for transmitting and receiving the encoded information as a serial data stream on the SATA interface. The SATA protocol supports an out-of-band (OOB) signaling scheme. Out-of-band signaling is used for the following functions:

Establish communication between a host and drive to identify the type of drive used in the system
Identify the maximum operating data rate of the host and drive

An out-of-band signal is a tri-level signal that contains a pattern of idle and burst intervals. Out-of-band signaling is used to identify specific actions during conditions such as the receiving interface being inactive or in a low-power state. The out-of-band signals comprise COMRESET, COMINIT and COMWAKE. Altera devices are configured to use the GXB transceiver (gigabit transceiver block) on the Cyclone V SoC to generate and detect out-of-band sequences through the transmitter electrical-idle and receiver signal-detect features. A simplified view of the resulting handshake is sketched below.
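As a rough illustration of the OOB handshake order (COMRESET from the host, COMINIT from the device, then a COMWAKE exchange followed by ALIGN-based speed negotiation), here is a simplified, host-side state machine sketch. It is a didactic approximation of the SATA handshake, not iWave's controller IP; real designs implement this in hardware.

```python
from enum import Enum, auto

class OobState(Enum):
    SEND_COMRESET = auto()       # host resets the link
    WAIT_COMINIT = auto()        # device announces itself
    SEND_COMWAKE = auto()        # host wakes the device
    WAIT_COMWAKE = auto()        # device confirms wake
    SPEED_NEGOTIATION = auto()   # ALIGN primitives settle on 1.5 or 3.0 Gbps
    LINK_UP = auto()

# (current state, event) -> next state; any other event leaves the state unchanged.
TRANSITIONS = {
    (OobState.SEND_COMRESET, "comreset_sent"): OobState.WAIT_COMINIT,
    (OobState.WAIT_COMINIT, "COMINIT"): OobState.SEND_COMWAKE,
    (OobState.SEND_COMWAKE, "comwake_sent"): OobState.WAIT_COMWAKE,
    (OobState.WAIT_COMWAKE, "COMWAKE"): OobState.SPEED_NEGOTIATION,
    (OobState.SPEED_NEGOTIATION, "align_locked"): OobState.LINK_UP,
}

def step(state, event):
    return TRANSITIONS.get((state, event), state)

state = OobState.SEND_COMRESET
for event in ["comreset_sent", "COMINIT", "comwake_sent", "COMWAKE", "align_locked"]:
    state = step(state, event)
print(state)   # OobState.LINK_UP
```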

iWave has developed hardware and software solutions around the Altera Cyclone V SX series SoC-based System on Module (SOM). The SOM is also compatible with the latest Qseven specification (version 2.0), supporting all the major high-speed interfaces such as SATA, Gigabit Ethernet, LVDS, multiple USB 2.0 host ports and more.

For more information: mktg@iwavesystems.com

RoomAlive: another way Microsoft uses to control people

What is RoomAlive?
RoomAlive is capable of transforming any room into a giant playing arena. It is an extension of Microsoft's previous IllumiRoom research project. It combines Kinect depth cameras and projectors to create a unique augmented-reality experience in a room: you can interact with the projected objects and play with them. It uses six Kinect depth cameras for depth sensing.

How does RoomAlive project things?
The basic unit of RoomAlive is called a node. A node consists of a projector, a Kinect camera and a computer. The nodes are fitted to the ceiling and create a 3D projection in the room. Microsoft Research has used around three such nodes to cover an average-sized living room.

How does it engage the player?
RoomAlive tracks your position continuously in real time. Any time you touch a surface, it responds and acts like a touchscreen; this is possible because the system knows where you are relative to the projected surfaces. It uses parallax, i.e. adjusting 2D projections so they appear as 3D objects from the player's viewpoint. Microsoft has also enabled a mode named Mano-a-Mano, in which two people can each enjoy their own perspective in the same room. A toy sketch of the touch-detection idea follows below.
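How a projector-plus-depth-camera rig can treat an ordinary wall as a touchscreen can be sketched in a few lines. This is a hypothetical illustration of the general idea (compare a live depth frame against a background depth map), not Microsoft's RoomAlive code; the threshold values and frame sizes are assumptions.

```python
import numpy as np

TOUCH_MIN_MM = 5    # assumed: fingertip must be at least this close to the surface
TOUCH_MAX_MM = 40   # assumed: ...and no farther than this, or it is just a hover

def touch_mask(background_depth_mm, live_depth_mm):
    """Return a boolean mask of pixels where something hovers just above
    the known surface, i.e. likely touch points."""
    diff = background_depth_mm - live_depth_mm   # positive where something is in front
    return (diff >= TOUCH_MIN_MM) & (diff <= TOUCH_MAX_MM)

# Usage with synthetic frames (a real system would read Kinect depth frames):
background = np.full((424, 512), 2000, dtype=np.int32)  # wall roughly 2 m away
live = background.copy()
live[200:210, 250:260] = 1985                            # a "finger" 15 mm in front
print(touch_mask(background, live).sum(), "pixels flagged as touch")
```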
What games are available right now on RoomAlive?
Games developed for RoomAlive include Whack-A-Mole, in which players use an infrared gun to bust moles that appear all around the room. Another is a robot-fighting game played with an ordinary Xbox controller.
A Microsoft Research spokesperson says:

“There’s still lots to explore with RoomAlive as a gaming platform, we envision a future where games can use physical objects as part of the game.”

Cell phone towers: a deadly weapon?

As if skeptics didn’t have enough to facepalm about, a recent study published in PLOS ONE claims to have identified a model in which anthropogenic (meaning, human-caused) RF emissions are enough to cause pain in amputees. It also claims to reconcile anecdata from “power lines give me a migraine” types with the overwhelming public and scientific consensus that anthropogenic RF EMFs don’t actually have adverse effects. But it fails hard at supporting its claims.

If you sift through the study, the actual hypothesis becomes clear, and it has little to do with the deeply misleading press release. Sleight-of-headline aside, the team claims that they’ve found a neuropathology that makes rats sensitive enough to RF emissions that they can feel pain from anthropogenic RF like what cell towers produce. The condition is called a neuroma: a usually-benign growth on a damaged nerve that’s already known to cause feelings of electric shocks. Neuromas are also known to contribute to phantom limb pain, which happens to amputees.

The paper says that rats with neuromas react to less than a watt per square meter of applied microwave-frequency energy. It goes on to hypothesize that this happens because neuromas have too many of a sensory receptor usually used to sense heat. Those things on their own are testable. And they’re the only things about this study that make any sense.

The study and its press release have some major problems that I invited the lead author, Dr. Mario Romero-Ortega, to discuss via email (he didn’t respond).

Above the fold, the press release rests on the personal experience of retired Maj. David Underwood, a veteran of the Iraq war whose injuries resulted in the amputation of his left arm — which he believes hurts when he drives past cell towers, or gets close to cell phones on roam. Buried below the fold in the press release, it adds that as a result of interactions with Maj. Underwood, one Dr. Mario Romero-Ortega conducted this study on rats and cell cultures in vitro. And because there exists anecdotal evidence, but no scientific consensus or human studies, Dr. Romero-Ortega opines that the results of this study in rats are “very likely” generalizable to humans.

Neuromas are known to cause neuropathic pain all on their own. The premise of this study is that the researchers injured rat sciatic nerves and gave them neuromas, and that those rats experienced pain in the presence of 915 MHz RF radiation. Why does it surprise us that rats who were already in pain from their neuromas acted like they were in pain? Why does the law of parsimony not take us back to the null hypothesis?

Besides, the numbers just don’t work. They don’t even come close to working, even with extremely generous margins. To start with, the authors’ own data doesn’t actually follow the inverse square law: the RF field apparently increases with increasing distance from the source, and if they actually used the numbers they published for their model, it belies the whole idea of the RF emissions being the problem. Then, the RF dose they applied to the rats amounted to less than a third of what an iPhone 5 puts out, and not what you’d actually be getting at 40 m from a cell tower broadcasting at 50 W. And even assuming that the authors’ estimation of RF emission strength over distance was correct, their own graph shows that they cherry-picked the high outlier in their data (the yellow dot among all the reds in their figure) to use as a representative value.
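For a sense of scale, here is a quick back-of-the-envelope check of that 40 m claim, assuming a 50 W transmitter radiating isotropically (real cell antennas are directional, so this is only an order-of-magnitude sketch):

```python
import math

P_W = 50.0    # transmit power (watts)
r_m = 40.0    # distance from the tower (metres)

# Free-space, inverse-square power density: S = P / (4 * pi * r^2)
power_density = P_W / (4 * math.pi * r_m ** 2)
print(f"{power_density * 1000:.2f} mW/m^2")   # ~2.5 mW/m^2, hundreds of times below 1 W/m^2
```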

There’s a conflict of interest that’s important here, too. Dr. Romero-Ortega is a well-established neuroscientist and professor of neurology and plastic surgery — and he’s also the founder and Chief Scientific Officer for “Nerve Solutions Inc.,” a company which is currently “below the radar” because it’s having “market challenges” selling the devices he invented, including one he claims can relieve pain by limiting the formation of neuromas.


But in the RF paper, he claimed that he didn’t have any patents to disclose. Done like this, it really looks like he had a saleable idea, but realized ex post facto that he needed some science to support his marketing claims. It wouldn’t be the first time a scientist has taken real research, used it to lampshade pseudoscience, and co-opted it to turn a profit off the less scientifically literate. The university’s own press release goes on to say that their next step should be building a device people can use to block RF signals. Charging people in pain for Faraday cages to block RF signals? Either this is groundbreaking, game-changing original research that just hasn’t hit the mainstream yet, or it’s one slick infomercial removed from tinfoil hats.


Now let me be very clear. I am not trying to deride people who have unexplained problems just because my thoughts on the etiology of those problems differ from theirs. Nor do I wish to disparage Maj. Underwood’s experiences or Dr. Romero-Ortega’s research. Amputees have experienced shocking pains since long before cell towers were conceived.


In medicine, the saying goes, pain is whatever the experiencing person says it is. Nobody’s saying amputees don’t have pain. It’s just that we are dancing on the edge of Ockham’s razor in trying to say it’s the cell towers causing pain. Extraordinary claims require extraordinary evidence, and a single conflicted study does not constitute extraordinary evidence. Science is supposed to be accessible to everybody. The only way we can keep it like that is by being scrupulously honest about what we can claim.

                   

FPGA cores getting cooler than you might think

As microprocessors have grown in size and complexity, it’s become increasingly difficult to increase performance without skyrocketing power consumption and heat. Intel’s CPU clock speeds have remained mostly flat for years, while AMD’s FX-9590 CPU and R9 Nano GPU both illustrate how dramatically power consumption changes with clock speed. One of the principal barriers to increasing CPU clocks is that it’s extremely difficult to move heat out of the chip. New research into microfluidic cooling could help solve this problem, at least in some cases.

Microfluidic cooling has existed for years; we covered IBM’s Aquasar cooling system back in 2012, which uses microfluidic channels — tiny microchannels etched into a metal block — to cool the SuperMUC supercomputer. Now, a new research paper on the topic has described a method of cooling modern FPGAs by etching cooling channels directly into the silicon itself. Previous systems, like Aquasar, still relied on a metal transfer plate between the coolant flow and the CPU itself.

Here’s why that’s so significant. Modern microprocessors generate tremendous amounts of heat, but they don’t generate it evenly across the entire die. If you’re performing floating-point calculations using AVX2, it’ll be the FPU that heats up. If you’re performing integer calculations, or thrashing the cache subsystems, it generates more heat in the ALUs and L2/L3 caches, respectively. This creates localized hot spots on the die, and CPUs aren’t very good at spreading that heat out across the entire surface area of the chip. This is why Intel specifies lower turbo clocks if you’re performing AVX2-heavy calculations.

By etching channels directly on top of a 28nm Altera FPGA, the research team was able to bring cooling much closer to the CPU cores and eliminate the intervening gap that makes water-cooling less effective than it would otherwise be. According to the Georgia Institute of Technology, the research team focused on 28nm Altera FPGAs. After removing the existing heatsink and thermal paste, the group etched 100-micron silicon cylinders into the die, creating cooling passages. The entire system was then sealed using silicon and connected to water tubes.

“We believe we have eliminated one of the major barriers to building high-performance systems that are more compact and energy efficient,” said Muhannad Bakir, an associate professor and ON Semiconductor Junior Professor in the Georgia Tech School of Electrical and Computer Engineering. “We have eliminated the heat sink atop the silicon die by moving liquid cooling just a few hundred microns away from the transistors. We believe that reliably integrating microfluidic cooling directly on the silicon will be a disruptive technology for a new generation of electronics.”

Could such a system work for PCs?
The team claims that using these microfluidic channels with water at 20C cut the on-die temperature of their FPGA to just 24C, compared with 60C for an air-cooled design. That’s a significant achievement, particularly given the flow rate (147 milliliters per minute). Clearly this approach can yield huge dividends — but whether or not it could ever scale to consumer hardware is a very different question.
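A rough sanity check of what that flow rate can carry away (my own back-of-the-envelope estimate, not a figure from the paper) needs nothing more than the heat capacity of water:

```python
rho_g_per_ml = 1.0      # density of water, g/mL
c_p = 4.186             # specific heat of water, J/(g*K)
flow_ml_per_min = 147.0 # flow rate quoted above

mass_flow_g_per_s = flow_ml_per_min * rho_g_per_ml / 60.0   # ~2.45 g/s
watts_per_kelvin = mass_flow_g_per_s * c_p                  # ~10.3 W per 1 K rise
print(f"{watts_per_kelvin:.1f} W removed per kelvin of coolant temperature rise")
```

So even a modest rise in coolant temperature across the die soaks up tens of watts, which is consistent with the large temperature drop the team reports.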

As the feature image shows, the connection points for the hardware look decidedly fragile and easily dislodged or broken. The amount of effort required to etch a design like this into an Intel or AMD CPU would be non-trivial, and the companies would have to completely change their approach to CPU heat spreaders and cooling technology. Still, technologies like this could find application in HPC clusters or any market where computing power is at an absolute premium. Removing that much additional heat from a CPU die would allow for substantially higher clocks, even with modern power-consumption scaling.

Become immune to electric shocks and act like a superhuman

       An Anti-Shock Implant, also known as an A-S Implant, is a small device that can be implanted in the back of a character's neck, making the character immune to stun attacks.[1] A-S implants must be installed at a hospital.[2][3]

An A-S Implant costs 2,000 Cr, including installation.

A robot may be equipped with a surge protector that is functionally identical to an A-S Implant. The cost of this device is 2,200 Cr.[4]

Notes & References

Source: Star Frontiers: Alpha Dawn
↑ Such as from Electric Shock Weapons and from Sonic Stunners.
↑ Or hospital-grade facility.
↑ Though an out-patient procedure, it requires a full day (10 hours) to install, for both pre-op and post-op preparations and recovery.
↑ Installed. A roboticist with access to a robcom kit may purchase the device for 2,000 Cr and install it herself for no additional cost.

Saturday, February 6, 2016

What does it feel like to share our personality through Bluetooth? Can we now save our lifestyle for our children to inherit?




Until now, we have been busy learning the art of transmitting files and data between computers and mobiles. But to my surprise, an eminent inventor recently claimed that our computers may very soon be capable of transmitting the complexities of human personalities.

Sebastian Thrun, who founded the Google X laboratory, the birthplace of technical wonders like Google Glass and driverless cars, believes that we can get to the point where we are able to outsource our personal experiences, or possibly our very personality, entirely to a computer. He admits that the concept might seem impossible and unimaginable, but believes it can be turned into reality.
Interestingly, Thrun's other predicted inventions are even more startling. They include flying cars, computers implantable into the human body and medical treatments that drastically curb unnatural deaths. It is proof that not everything has been invented yet, and there is much more for us to see in the years ahead.

However, a virtual reality pioneer, Jaron Lanier, who is known for his books on the philosophy of computers, doubts that personalities can be shared via computers. He warned that Silicon Valley has put too much faith in technological progress, and said that we are heading for trouble, because defining our personality is a challenge in itself: we go through many changes over a lifetime.

If you find this story interesting, share your opinions with us through comments. For more tech updates and scientific wonders, keep reading fossBytes.

Can genetic studies recreate the expensive DNA of superhumans who cannot feel pain?

Short Bytes: Pharma companies are studying the mutations of real superhumans, such as pain insensitivity and super-dense bones, to create wonder drugs for the general public. The DNA of these rare humans is extremely valuable to researchers, worth billions.

Most of you would be familiar with Stan Lee’s superhumans – The Unbreakable, The Electro Man, The Hammer Head and numerous more. People who can withstand tremendous amounts of electricity that could easily roast a normal person, someone who feels absolutely no pain, or a guy who can run 50 marathons in 50 states in 50 days without fatigue: these are the real-life superhumans.
These are people with peculiar irregularities in their DNA, irregularities worth billions of dollars to pharmaceutical companies. If biomedical scientists are able to understand and extract these aberrations to engineer new drugs, they could be life savers in critical situations. Scientists have already come up with new anti-ageing pills that claim to increase your health span.

According to a report on Bloomberg, Steven Pete was born with a gift that most of us could only wish for. Pete has an incredibly high threshold of pain, passed down to him as a combination of subtle mutations from his parents. He cannot feel the burn of fire and can walk on glass shards without feeling a thing.

Timothy Dreyer is another superhuman, with a bone structure reminiscent of the Incredible Hulk. His bones are so dense that he could literally walk away from car accidents. His condition is termed sclerosteosis.

Pharma companies across the globe are studying these mutations and working on ways to create wonder drugs as we speak. For the medicine business, this is also an opportunity to increase its worth by billions.

Also Read: Organic Computer: Researchers Make Internet By Connecting Your Brains

Currently, there are only about a hundred people with sclerosteosis, and research is under way to use this mutation against osteoporosis. Similarly, Steven Pete’s insensitivity to pain could provide a major breakthrough for the pain-relief industry, which is estimated to be worth US$18 billion.

The researchers have already created a medication derived from the gene of another superhuman. It is a cholesterol-lowering drug that will soon be available in the US market.

But these heightened abilities and resistive powers also have side effects. Because of Pete’s superhuman ability to endure pain, he chewed his own tongue as a baby, and as he grew up his left leg suffered permanent damage from injuries which, unfortunately, he never felt. Dreyer’s bone condition has led to hearing loss because of the huge cranial pressure.

Researchers are trying to extract the best from these mutations and utilize them without the side effects. The superhumans out there can feel content that their genetic contribution will help fellow humans facing critical conditions.

Human brains now part of Internet-like connections

Most of us agree that our brains work better than computers (and some don’t). So why not connect multiple brains to perform a task more proficiently? Recently, researchers at Duke University proposed the working concept of Brainets: networks made by wiring up multiple animal brains to exchange information using brain-to-brain interfaces. They have connected the brains of four rats to make a “Brainet” that finishes tasks faster and more efficiently than an individual brain.

For the first time, multiple brains have been connected to perform some work. The research team used electrodes to connect the brains of rats and monkeys and carry out simple functions like moving a computer-generated arm. The team connected four rat brains using two sets of electrodes and sent an identical signal to the brain of each rat. The response time was then monitored on a computer. This connected arrangement was called an organic computer, or brainet.
Rats obviously need some training to perform a function again and again. So each time the rats reacted with the same response, they were given water. After multiple tries, the rats learnt how to coordinate their responses. Surprisingly, the success rate was as high as 87 percent.

After the rats, monkeys were used for the tests: two monkeys were connected with electrodes and a computer. One monkey then had to look at a computer screen showing a picture of a ball and an arm. The combined signals from the monkeys’ brains had to move the objects, which they eventually learnt to do.

They say that once the animals started behaving in sync, they could perform tasks much the way computers store information for later recall. This worked as long as the animals were awake. The Brainets performed at higher levels than a single animal in the tasks. Thus, the researchers worked towards creating a superbrain by joining multiple brains.

Google X Founder: Downloading Our Personalities To Computers Possible Soon

The next step is to work on the possibilities of connecting human brains in order to do wonders in medical science, for example by using someone else’s brain to help treat a paralyzed patient. If human brains are connected, organic computers would surely open a whole new era of research and innovation.

Source: Nature, Jordaneuro

Comment your views regarding this Organic Computer story and subscribe to fossBytes newsletter for free updates.

For more technology updates and interesting stories, follow fossBytes.


Friday, February 5, 2016

Flying wings coming faster than expected

In December, US aircraft maker Northrop Grumman unveiled a revolutionary design for a future fighter aircraft that could, theoretically, fly over the war zones of the coming century.
Their concept looks more like a flying saucer than a fighter plane – it is what aviation experts call a ‘flying wing’, a design which ditches the traditional tail fin at the back. This design helps reduce the aircraft’s size, and creates a smoother shape – one less likely to bounce back radar signals being sent out to detect it.

Northrop Grumman's concept for a flying wing fighter has similarities to the Hortens' innovative design (Credit: Northrop Grumman)
It looks about as futuristic as fighter aircraft can get, but its genesis goes far further back than you think – to a truly groundbreaking jet fighter design built and flown in Nazi Germany in the dying days of World War Two.
That aircraft – the Horten Ho 229 – might be a footnote in aviation history, but it was so far ahead of its time that its aerodynamic secrets are still not completely understood. In fact, there’s a chief scientist at Nasa still working to discover just how its creators managed to overcome the considerable aerodynamic challenges that should have made it unflyable.

The Ho 229's design was incredibly advanced for its time (Credit: Malyszkz/Wikipedia/)
The ‘flying wing’ design isn’t an everyday sight in our skies because it’s incredibly hard to make work. By getting rid of the tail – which helps keep the aircraft stable and stops it ‘yawing’ from side to side – the aircraft is a lot harder to control. So why would you try to build something that was inherently difficult to fly?
If you can make a flying wing work, it has several benefits. The resulting plane becomes difficult to spot on radar, partly because it has no tail fins that will bounce back radar waves. The smooth shape also means the aircraft has as little drag as possible, which means it can be lighter and more fuel-efficient, and possibly fly faster than a more conventionally shaped aircraft using the same engine.
All of that looks good on paper – but getting it to work in the real world is a lot more difficult. Flying wings have proved to be a headache for aircraft designers stretching back almost to the time of the Wright Brothers. All of which makes the achievements of the German Horten brothers so impressive.
The Hortens – Walter and Reimar – began designing aircraft in the early 1930s, while Germany was officially banned from having an air force under the constraints of the Treaty of Versailles following World War One. The brothers had joined sporting air clubs, which were set up as a way to get around such restrictions and were a foundation for what would become Nazi Germany’s air force, the Luftwaffe.
Many of the amateur aviators who would later become Luftwaffe pilots cut their teeth flying various gliders and ‘sailplanes’, unpowered aircraft which taught them the rudiments of flying. The Horten brothers combined flying with designing aircraft as well – turning the family’s lounge-room into a workshop to work on new designs, according to the aviation website Aerostories.
New fighter
The pair followed some of the esoteric ideas of unconventional aircraft designer Alexander Lippisch, a pioneer of delta-wing aircraft designs; another radical form that came into its own once jet engines had been developed. The Hortens developed their flying wing approach with increasingly effective results, ending in their Horten Ho IV glider, in which the pilot lay prone, which meant the cockpit canopy didn’t jut so far out from the fuselage and create aerodynamic drag.
By the time the Ho IV glider was being tested, Walter Horten had already served as a Luftwaffe fighter pilot during the Battle of Britain. Russ Lee, a curator at the Smithsonian Air and Space Museum in Washington DC, says this was a turning point. “The Germans, of course, lost the Battle of Britain, and Walter realised that Germany needed a new kind of fighter aircraft. And an all-wing aircraft might make that good new fighter.”
At the same time, the head of the Luftwaffe, Hermann Goring, had requested designs in a project called ‘3x1000’ – aircraft that would be able to carry a 1,000kg (2,200lb) bombload 1,000 miles (1,600 kilometres) at 1,000km/h (625mph). That led the Hortens to develop what would eventually become the Ho 229 prototypes. The first of the three prototypes was an unpowered glider, built to test the aerodynamic design. The second added jet engines, and flew successfully on 2 February 1945, though it crashed after engine failure on another test flight a few weeks later, killing its test pilot. But the tests proved, says Lee, that the aircraft could take off, cruise and land, and the aircraft’s basic design was sound.

The prototype Ho 229 is currently undergoing restoration (Credit: BrettC23/Wikipedia/CC BY-SA 4.0)
Lee has a good reason to know the Ho 229 backstory so well; he’s responsible for preserving and restoring the only other Ho 229 to have been built, the third, partially completed prototype, known as the Ho 229 V3. It was taken – like many other examples of cutting-edge German aircraft design – to the US after World War Two. Along the way, it spent a brief time at the British testing facility at Farnborough, near London.
“The word revolutionary is not inappropriate when you’re talking about the Ho 229,” says Lee. “The Hortens were more advanced in this area than anyone else in the world.”
The Northrop B-2, the aircraft that is at the forefront of the US nuclear deterrent, looks at first glance like an obvious descendant of the Hortens’ design genius. So much so that some commentators described the Ho 229 as the “world’s first stealth bomber” – though its role would have been to shoot down the fleets of Allied bombers that were attacking German industrial targets and cities.
“Just getting one of these things to fly, well you had to make the wing do all the work, and end up with a plane that behaved as well as a conventional plane with a tail.”
Besides the tendency to “yaw” side to side at the best of times, a tailless plane can become virtually uncontrollable when the engine cuts out. “One of the big things with this aircraft was its stability in flight. One of the hardest things is getting an aircraft without a tail to be able to be flyable during a stall, and that’s something every aircraft has to be able to complete,” says Lee.
The Hortens were able to keep their aircraft stable by making the wing long and thin (known as a high-aspect-ratio wing). This spread the weight of the aircraft over a greater surface area and also decreased the proportion of air that creates a vortex around the wing – a mini whirlwind that creates drag – slowing the aircraft down.
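The standard lifting-line relation (a textbook result, not a figure from the article) shows why a long, thin wing helps: induced, vortex-related drag falls in inverse proportion to the aspect ratio. The lift coefficient and span-efficiency value below are illustrative assumptions.

```python
import math

def induced_drag_coeff(c_lift, aspect_ratio, e=0.9):
    """Lifting-line estimate: C_Di = C_L^2 / (pi * e * AR); e is an assumed
    span-efficiency factor."""
    return c_lift ** 2 / (math.pi * e * aspect_ratio)

# Doubling the aspect ratio (span^2 / wing area) halves the induced drag.
for ar in (6, 12, 24):
    print(ar, round(induced_drag_coeff(0.5, ar), 4))
```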
Radical shape
Reimar Horten may not have been fully aware that he was solving these two crucial aerodynamic problems in one fell swoop. That’s what Al Bowers, a Nasa chief scientist at the Neil A Armstrong Flight Research Center in California believes. Bowers has been testing the Hortens’ design principles for many years. Bowers says Reimar Horten’s genius was in using a ‘bell-shaped’ wing to cancel out the yawing issues an aircraft without a tail usually suffers, but which also reduced drag.
“The Ho 229 was decades ahead of its time,” says Bowers. “I believe it will be shown as the progenitor of the future of aviation.”
Flying wing designs gained some credence in the 1950s, mostly due to the efforts of Jack Northrop, who had been inspired by seeing some of the Hortens’ sports gliders in the 1930s. The captured Ho 229 may also have encouraged him. Northrop’s unsuccessful YB-35 flying wing bomber of the late 1940s was hamstrung by massive vibration problems caused by its propeller-driven engines, showing that the Hortens were right to have used jets in the Ho 229. Northrop’s later YB-49 design used jet engines instead and, while it never went into service, it paved the way for the company’s B-2 Spirit stealth bomber decades later, a design which certainly shares some physical similarities with the Ho 229.
Bowers has been using the principles from the Ho 229, and from the earlier experiments of German aerodynamicist Ludwig Prandtl, in a Nasa design: the Prandtl-D, an unmanned flying wing concept that could one day be used to explore Mars.
The Prandtl-D would be used on Martian research missions, possibly launched from a high-altitude glider, flying under its own power for some 10 minutes before gliding down to land on the planet’s surface. The Prandtl-D won’t be anywhere near as big as the Ho 229, however – it’s expected to have a wingspan of only 2ft and weigh little more than 1.3kg (3lb).

The Ho 229's design has influenced a Nasa project for a small flying wing which could explore Mars (Credit: Tom Tschida/Nasa)
“We believe that Prandtl’s solution (and Horten’s) is the answer we’ve been looking for all along,” says Bowers. “It explains so many things about the flight of birds, and minimising drag, and superior efficiency possible in future aircraft. It is my belief that we can improve aircraft efficiency by at least 70%. And my own work is just a scratch of the surface. Reimar Horten was on the right track. He never saw the full potential of his ideas. Yet I suspect if he could see where we are today, he would be pleased. Perhaps not so pleased by the pace of our progress, but that we are finally listening.”
As for the Smithsonian’s example of this inspired design? Lee says the work to preserve this pioneering design is gradual and painstaking, and unlikely to be finished until the early 2020s. Then, this inspiring, overlooked design will be on public display – and the Hortens’ aerodynamic genius can be appreciated by a wider audience.

Digital connection of human brains: Internet of human beings

Vocal cords were overrated anyway. A new Army grant aims to create email or voice mail and send it by thought alone. No need to type an e-mail, dial a phone or even speak a word.

Known as synthetic telepathy, the technology is based on reading electrical activity in the brain using an electroencephalograph, or EEG. Similar technology is being marketed as a way to control video games by thought.

"I think that this will eventually become just another way of communicating," said Mike D'Zmura, from the University of California, Irvine and the lead scientist on the project.

"It will take a lot of research, and a lot of time, but there are also a lot of commercial applications, not just military applications," he said.

The idea of communicating by thought alone is not a new one. In the 1960s, a researcher strapped an EEG to his head and, with some training, could stop and start his brain's alpha waves to compose Morse code messages.
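That alpha-wave trick is easy to sketch in code. The following is a hypothetical illustration (the sampling rate, the 8-12 Hz band definition and the threshold are all assumptions, and no such script was part of the 1960s experiment): estimate alpha-band power in successive windows of an EEG trace and turn it into on/off symbols.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed EEG sampling rate (Hz)

def alpha_power(segment):
    """Mean spectral power in the 8-12 Hz (alpha) band of one EEG segment."""
    freqs, psd = welch(segment, fs=FS, nperseg=min(len(segment), FS))
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean()

def on_off_symbols(eeg, window_s=1.0, threshold=1.0):
    """Label each window 'on' or 'off' by its alpha power; the threshold
    would have to be calibrated per person in practice."""
    step = int(window_s * FS)
    return ["on" if alpha_power(eeg[i:i + step]) > threshold else "off"
            for i in range(0, len(eeg) - step + 1, step)]
```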
The Army grant to researchers at the University of California, Irvine, Carnegie Mellon University and the University of Maryland has two objectives. The first is to compose a message using, as D'Zmura puts it, "that little voice in your head."

The second part is to send that message to a particular individual or object (like a radio), also just with the power of thought. Once the message reaches the recipient, it could be read as text or as a voice mail.

While the money may come from the Army and its first use could be for covert operations, D'Zmura thinks that thought-based communication will find more use in the civilian realm.

"The eventual application I see is for students sitting in the back of the lecture hall not paying attention because they are texting," said D'Zmura. "Instead, students could be back there, just thinking to each other."

EEG-based gaming devices are large and fairly conspicuous, but D'Zmura thinks that eventually they could be incorporated into a baseball hat or a hood.

Another use for such a system is for patients with Lou Gehrig's disease, or ALS. As the disease progresses, patients have fully functional brains but slowly lose control over their muscles. Synthetic telepathy could be a way for these patients to communicate.

One of the first areas for thought-based communication is in the gaming world, said Paul Sajda of Columbia University. Commercial EEG headsets already exist that allow wearers to manipulate virtual objects by thought alone, noted Sajda, but thinking "move rock" is easier than, say, "Have everyone meet at Starbucks at 5:30."

One difficulty in composing specific messages is fundamental — EEGs are not very specific. They can only locate a signal to within about one to two centimeters. That's a large distance in the brain. In the brain's auditory cortex, for example, two centimeters is the difference between low notes and high notes, D'Zmura said.

Placing electrodes between the skull and the brain would offer more precise readings, but it is expensive and requires invasive surgery. To work around this problem, the scientists need to gain a much better understanding of which words and phrases light up which brain sections. To create a detailed map of the brain, scientists will also use functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG).

Each technology has its own strengths and weaknesses. EEGs detect brain activity only on the outer bulges of the brain's folds. MEGs read brain activity on the inner folds but are too large to put on your head. fMRIs detect brain activity more accurately than either, but are heavy and expensive. Of all three technologies, EEG is the one currently cheap enough, light enough and fast enough for a mass-market device.

The map generated by all three technologies will help the computer guess which word or phrase a person means when a part of the brain lights up on the EEG. The idea is similar to how dictation software like Dragon NaturallySpeaking uses context to help determine which word you said.

Mapping the brain's response to most of the English language is a large task, and D'Zmura says that it will be 15-20 years before thought-based communication is a reality. Sajda, who is on sabbatical in Japan to research using EEGs to scan images rapidly, sounded skeptical but excited.

"There are technical hurdles that need to be overcome first, but then again, 20 years ago people would have thought that the two of us talking to each other half a world away over Skype (an Internet-based phone service) was crazy," said Sajda.

To those who might be nervous about thought-based communication turning into a sci-fi comedy of errors, D'Zmura says not to worry. Mind-message composition would take specific conscious thoughts and training to develop them. The device would also have an on/off switch.

"When I was a kid I occasionally said things that were inappropriate, and I learned not to do that," said D'Zmura. "I think that people would learn to think in a way the computer couldn't interpret. Or they can just switch it off."

Never forget to use a protected RFID credit/debit card to guarantee maximum financial security against fraud

A team of researchers from the Massachusetts Institute of Technology (MIT) has developed a new type of radio-frequency identification (RFID) chip that, according to them, is virtually impossible to hack. If introduced by credit card companies, the chip would prevent your credit card number or key-card information from being stolen.

The chip prevents so-called side-channel attacks, which analyse patterns of memory access or fluctuations in power usage while a device is performing a cryptographic operation in order to extract its cryptographic key.

Chiraag Juvekar, a graduate student in electrical engineering at MIT, explained what the new RFID chip does:

“The idea in a side-channel attack is that a given execution of the cryptographic algorithm only leaks a slight amount of information,” he said. “So you need to execute the cryptographic algorithm with the same secret many, many times to get enough leakage to extract a complete secret.”

According to MIT, this bolstered RFID chip would generate random numbers, producing a new secret cryptographic key with each transaction. A central server would then use the same random-number generator to keep up with the ever-changing keys.
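To see how "one key per transaction" can stay synchronized without ever transmitting the key, here is a toy sketch using a keyed hash over a shared seed and a transaction counter. This is my own illustration of the general idea, not the MIT team's actual construction (their chip uses a hardware random-number generator and ferroelectric storage); the seed and counter names are hypothetical.

```python
import hmac, hashlib

SHARED_SEED = b"provisioned-at-manufacture"   # hypothetical secret both sides hold

def key_for_transaction(counter: int) -> bytes:
    """Derive the per-transaction key from the shared seed and a counter."""
    return hmac.new(SHARED_SEED, counter.to_bytes(8, "big"), hashlib.sha256).digest()

# The tag and the central server each track the counter, so both arrive at the
# same key for transaction 41 without it ever crossing the air interface.
assert key_for_transaction(41) == key_for_transaction(41)
assert key_for_transaction(41) != key_for_transaction(42)   # the key changes every time
```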

Such a system would still, however, be vulnerable to a “power glitch” attack in which the RFID chip’s power would be repeatedly cut right before it changed its secret key. An attacker could then run the same side-channel attack thousands of times, with the same key.

Two design innovations allow the MIT researchers’ chip to thwart power-glitch attacks. One is an on-chip power supply whose connection to the chip circuitry would be virtually impossible to cut and the other is a set of “nonvolatile” memory cells that can store whatever data the chip is working on when it begins to lose power.

Juvekar, Anantha Chandrakasan, a professor of electrical engineering and computer science, and others used a special type of material known as ferroelectric crystals for these features. Ferroelectric crystals are already used by big chip makers like Texas Instruments to produce nonvolatile memory, that is, computer memory that retains data when it’s powered off.

The MIT research team has collaborated with Texas Instruments to build several prototypes of the new hack-proof RFID chip. They recently presented their research at the International Solid-State Circuits Conference in San Francisco.

If the MIT researchers’ chips are made commercially, it could mean that an identity thief could not steal credit card numbers or key-card information, and high-tech burglars could not swipe expensive goods from a warehouse and replace them with dummy tags.

Intelligent trained birds now used to fight illegal drones

The Dutch National Police is training eagles to capture drones flown by criminals and terrorists into restricted areas. A video reveals it is training these birds of prey to catch the menacing machines in mid-air, taking them down in one fell swoop.

The police, counter-terrorism agency NCTB and the ministries of justice and defence are working on a range of measures to combat drones, broadcaster Nos says.

Mark Wiebes, Innovation Manager of the National Unit of the police, said drone use is becoming more common, with people using them to take photographs, for example.

But they can be dangerous if they fall from the sky above crowds of people. Drones with built-in cameras also pose privacy risks.

‘There are situations in which drones are not allowed to fly. This has almost always to do with security,’ he added.

As you can see in the video above, the eagle quickly grabs the drone, seizing control from whoever is holding the remote and bringing it to the ground. The project is still in test phase but a spokesman said there was a ‘very real possibility’ that birds of prey could be used.

‘The bird sees the drone as prey and takes it to a safe place, a place where there are no other birds or people,’ Wiebes said. ‘That is what we are making use of in this project.’

‘Everyone can get hold of a drone, and that includes people who want to misuse them,’ police spokesman Michel Baeten told Nos. ‘It is a multifunctional piece of equipment and that means you can launch an attack with them as well.’

The eagles have been sourced from Guard From Above, a bird-of-prey training company based in the Netherlands, to test the raptors’ intelligence and accuracy. This means that they are going to be well looked after and comfortable with their handlers.

The eagles are being trained to identify and catch quadcopters, which are proving increasingly popular.

Mr Wiebes explained: ‘The bird sees the drone as prey and takes it to a safe area, a place where he does not suffer from other birds or humans.

‘We use [this instinct] in this project.’

However, there is one issue with an eagle going up against a drone: the potential for injury. The multiple rotors on each drone spin very fast and point upwards, meaning they could slice into the eagle’s legs or talons. In the video, one of the handlers says that the scales on the eagle’s legs and feet keep them safe.

To allay this concern, there is a chance the eagles could wear armour while patrolling the skies for drones, IEEE’s Spectrum reported.

As well as birds of prey, officials are also looking into the use of high-tech detection systems and equipment which can remotely take over control of a drone. Another potential measure could be a drone which is programmed to fire at or capture an enemy drone.

According to the Dutch Police, these tests should last a few months, at which point they will decide whether using the eagles in this way is an effective and appropriate means of preventing unwanted drone use.

What is Intel going to do with the optical FPGA chip?

The recent acquisition of Altera, the pioneer of programmable logic chips, for US$16.7 billion by the well-known chip maker Intel provides clear recognition of the perceived importance of field-programmable gate array (FPGA) technology. In essence, an FPGA chip is a universal signal processing chip that can be programmed or configured after fabrication to perform a specific task — be it speech recognition, computer vision, cryptography, or something else.

Originally commercialized in the mid-1980s by two US Silicon Valley firms, Altera and Xilinx (who today between them hold an ~80% share of the market), the FPGA chip has grown from humble origins and niche applications to ubiquity. The technology is found inside everything from digital cameras and mobile phones through to sophisticated medical imaging devices, telecommunications equipment and robotics. At the heart of an FPGA is a large array of logic blocks that are wired up by reconfigurable interconnects, allowing the chip to be reconfigured or programmed via specialized software. The use of a standard common hardware platform makes FPGAs far more flexible and cost-effective compared with application-specific integrated circuits (ASICs) — complex chips that are custom designed for a specific task.

What's potentially exciting is that there are now signs that the optical equivalent of an FPGA is on the horizon. Improvements in both silicon photonics and III–V compound semiconductor technology, such as InP and GaAs, mean that optical researchers are starting to build designs of programmable optical signal processors on a chip by cascading arrays of coupled waveguide structures that feature phase shifters to control the flow of light through the array and thus support reconfigurability. The theory of how such arrays behave has been analysed in depth by David Miller from Stanford University in the US who has published several papers on the topic.

Research teams around the globe specializing in microwave photonics, including Jianping Yao's group at the University of Ottawa, Canada, José Capmany's group in Valencia, Spain, and Arthur Lowery's group at Monash University in Australia, among others, are building experimental prototypes with encouraging results. As an example, the News and Views on page 6 of this issue describes an optical integrated circuit composed of a mesh of interconnected Mach–Zehnder interferometers, which acts as a fully programmable filter for radiofrequency (RF) signals.
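The building block such meshes cascade is a 2x2 Mach-Zehnder interferometer with two tunable phase shifters. A minimal numpy sketch of its transfer matrix, assuming ideal, lossless 50/50 couplers (a textbook idealization, not any particular group's device), shows how setting the phases reprograms where the light goes:

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # ideal 50/50 directional coupler

def mzi(theta, phi):
    """Transfer matrix of one cell: coupler, internal phase, coupler, output phase."""
    internal = np.diag([np.exp(1j * theta), 1.0])
    output = np.diag([np.exp(1j * phi), 1.0])
    return output @ BS @ internal @ BS

T = mzi(np.pi / 2, 0.0)
assert np.allclose(T.conj().T @ T, np.eye(2))   # unitary: a lossless router
print(np.round(np.abs(T) ** 2, 3))              # power split between the two output ports
```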

The prospect of an optical equivalent to the FPGA excites many in the photonics community. “Similar to the invention of electronic FPGAs in 1985, the availability of large-scale programmable optical chips would be an important step forwards towards ultrafast and wide-band signal processing,” says Yao. “Currently, digital signal processing speed is limited by the speed of analog-to-digital conversion (ADC). The world's fastest ADC [made by Texas Instruments] can operate at 1 giga-samples per second, which corresponds to a bandwidth of 500 MHz. For a large-scale programmable optical chip, the processing bandwidth can be 1,000 times wider, hundreds of gigahertz.”

Yao says that the ultrafast processing capabilities of optical chips could be useful for ultrahigh-speed ADC, all-optical signal processing in communications networks, or fast image processing.

“In principle, the concept of a universal programmable processor should unlock a considerable number of applications,” commented Capmany. “For example, in the case of RF photonics this could be RF filtering, arbitrary waveform generation, beam steering, instantaneous frequency measurements, analog-to-digital conversion.”

He says that at present, the design of optical circuits to perform a specific task is leading to a situation where there are almost as many technologies as there are applications. This fragmentation hinders cost-effective, mass-volume manufacture of a photonic solution, a situation that an optical programmable chip would help remedy.
