Archive

Archive for the ‘Technology Matters’ Category

Secure Communications Thanks to Quantum Physics

March 12, 2014

By: Christian J. Meier


 

One of the recent revelations by Edward Snowden is that the U.S. National Security Agency is currently developing a quantum computer. Physicists aren’t surprised by this news; such a computer could crack the encryption that is commonly used today in no time and would therefore be highly attractive for the NSA.

Professor Thomas Walther of the Institute of Applied Physics at the Technical University of Darmstadt is convinced that “sooner or later, the quantum computer will arrive.” Yet the quantum physicist is not worried. After all, he knows of an antidote: so-called quantum cryptography. It also uses the bizarre rules of quantum physics, but not to decrypt messages at record speed; quite the opposite, to encrypt them in a way that cannot be cracked even by a quantum computer. To do this, a “key” whose security rests on the laws of quantum mechanics has to be exchanged between the communication partners; this key then serves to encrypt the message. Physicists throughout the world are perfecting quantum cryptography to make it suitable for particularly security-sensitive applications, such as banking transactions or tap-proof communications. Walther’s Ph.D. student Sabine Euler is one of them.

As early as the 1980s, physicists Charles Bennett and Gilles Brassard thought about how quantum physics could help exchange keys while preventing eavesdropping. Something similar to Morse code is used: a sequence of light signals made of individual light particles (photons), with the information carried in the polarizations of successive photons. Undetected eavesdropping is impossible because of the quantum nature of photons: an eavesdropper has to measure the photons, and those measurements inevitably leave traces that the communication partners can notice.
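To make the idea concrete, here is a minimal, purely illustrative Python simulation of a BB84-style exchange under a simple intercept-and-resend attack (the function name, photon count, and noise-free channel are assumptions of this sketch, not a description of any real system). Without an eavesdropper the sifted key matches perfectly; an eavesdropper who must guess measurement bases introduces roughly 25 percent errors and is therefore detected.

```python
import random

def bb84_sketch(n_photons=2000, eavesdrop=False):
    """Toy BB84: return the error rate on the sifted key (hypothetical helper)."""
    errors = sifted = 0
    for _ in range(n_photons):
        bit = random.randint(0, 1)            # Alice's raw key bit
        basis_a = random.choice("+x")         # rectilinear (+) or diagonal (x) polarization
        photon_basis, photon_bit = basis_a, bit

        if eavesdrop:                         # Eve measures and resends each photon
            basis_e = random.choice("+x")
            if basis_e != photon_basis:       # wrong basis randomizes the outcome
                photon_bit = random.randint(0, 1)
            photon_basis = basis_e

        basis_b = random.choice("+x")         # Bob picks his own measurement basis
        result = photon_bit if basis_b == photon_basis else random.randint(0, 1)

        if basis_b == basis_a:                # sifting: keep only matching-basis rounds
            sifted += 1
            errors += (result != bit)
    return errors / sifted

print("error rate without eavesdropper:", bb84_sketch(eavesdrop=False))  # ~0.00
print("error rate with eavesdropper:   ", bb84_sketch(eavesdrop=True))   # ~0.25
```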

“That’s the theory,” says Walther. In practice, however, there are ways to listen in without being noticed, as hackers who specialize in attacking quantum cryptography systems already on the market have demonstrated. “Commercial systems have always relinquished a little bit of security in the past,” says Walther. To turn the protocol of Bennett and Brassard into reality, you need, for example, light sources that can be controlled so finely that they emit single photons in succession. Usually, a laser attenuated so strongly that it emits single photons serves as the light source. “But sometimes two photons can come out simultaneously, which might help a potential eavesdropper to remain unnoticed,” says Walther. The eavesdropper could intercept the second photon and let the first one pass on.

Therefore, the team led by Sabine Euler uses a light source that emits a signal whenever it sends out a single photon; this signal can be used to select only the individually transmitted photons for communication. Nevertheless, vulnerabilities remain. When the system changes the polarization of the light particles during encoding, for example, its power consumption varies or the timing of the pulses shifts slightly. “An eavesdropper could tap this information and read the message without the sender and receiver noticing,” explains Walther. Sabine Euler and her colleagues at the Institute of Applied Physics are trying to eliminate these vulnerabilities. “They are demonstrating a lot of creativity here,” says Walther approvingly. Thanks to such research, it will become harder and harder for hackers to exploit vulnerabilities in quantum cryptography systems.

The TU Darmstadt quantum physicists want to make quantum cryptography not only more secure but also more practical. “In a network in which many users wish to communicate securely with each other, the technology must be affordable,” says Walther. His team therefore develops its systems to be as simple as possible and amenable to miniaturization.

The research team is part of the Center for Advanced Security Research Darmstadt (CASED), in which the TU Darmstadt, the Fraunhofer Institute for Secure Information Technology and the University of Darmstadt combine their expertise in current and future IT security issues. Over 200 scientists conduct research in CASED, funded by the State Initiative for Economic and Academic Excellence (LOEWE) of the Hessian Ministry for Science and the Arts. “We also exchange information with computer scientists, which is very exciting,” says Walther.

After all, the computer science experts deal with many of the same issues as Walther’s quantum physicists. For example, Johannes Buchmann of the Department of Computer Science at the TU Darmstadt is also working on encryption methods that, in theory, cannot be cracked by a quantum computer. These methods are not based on quantum physics phenomena, however, but on mathematical problems believed to be intractable even for a quantum computer.

Therefore, it may well be that the answer to the first code-cracking quantum computer comes from Darmstadt.

Bizarre quantum physics and encryption

A quantum computer could quickly crack current encryptions because it can test very many possibilities simultaneously, in the same way as if you could try all possible variations for a password at once. After all, according to the quantum physics principle of superposition, atoms, electrons or photons can have several states simultaneously; for example, they can rotate clockwise and counterclockwise at the same time.

However, if you measure a property of a particle, such as its direction of rotation, the superposition is lost. This phenomenon is what makes quantum cryptography work. Eavesdroppers inevitably betray themselves because their measurements change the photon’s characteristics. Moreover, quantum physics forbids them from copying the photon with all its properties, so they cannot siphon off information and then pass undisturbed photons on to the receiver of the message.



Need a New Heart? We Can Print It!

February 12, 2014

New Advance in 3-D Printing and Tissue Engineering Technology

 Date: February 10, 2014

Source: Brigham and Women’s Hospital

Summary: Researchers have introduced a unique micro-robotic technique to assemble the components of complex materials, the foundation of tissue engineering and 3-D printing.

Described in the Jan. 28, 2014, issue of Nature Communications, the research was conducted by Savas Tasoglu, PhD, MS, research fellow in the BWH Division of Renal Medicine, and Utkan Demirci, PhD, MS, associate professor of Medicine in the Division of Biomedical Engineering, part of the BWH Department of Medicine, in collaboration with Eric Diller, PhD, MS, and Metin Sitti, PhD, MS, professor in the Department of Mechanical Engineering, Carnegie Mellon University.

Tissue engineering and 3D printing have become vitally important to the future of medicine for many reasons. The shortage of available organs for transplantation, for example, leaves many patients on lengthy waiting lists for life-saving treatment. Being able to engineer organs using a patient’s own cells can not only alleviate this shortage, but also address issues related to rejection of donated organs. In addition, current preclinical models for developing therapies and testing drugs have limitations in reliability and predictability. Tissue engineering provides a more practical means for researchers to study cell behavior, such as cancer cell resistance to therapy, and to test new drugs or combinations of drugs against many diseases.

The presented approach uses untethered magnetic micro-robotic coding for precise construction of individual cell-encapsulating hydrogels (such as cell blocks). The micro-robot, which is remotely controlled by magnetic fields, can move one hydrogel at a time to build structures. This is critical in tissue engineering, as human tissue architecture is complex, with different types of cells at various levels and locations. When building these structures, the location of the cells is significant in that it will impact how the structure will ultimately function. “Compared with earlier techniques, this technology enables true control over bottom-up tissue engineering,” explains Tasoglu.

Tasoglu and Demirci also demonstrated that micro-robotic construction of cell-encapsulating hydrogels can be performed without affecting cell vitality and proliferation. Further benefits may be realized by using numerous micro-robots together in bioprinting, the creation of a design that can be utilized by a bioprinter to generate tissue and other complex materials in the laboratory environment.

“Our work will revolutionize three-dimensional precise assembly of complex and heterogeneous tissue engineering building blocks and serve to improve complexity and understanding of tissue engineering systems,” said Metin Sitti, professor of Mechanical Engineering and the Robotics Institute and head of CMU’s NanoRobotics Lab.

“We are really just beginning to explore the many possibilities in using this micro-robotic technique to manipulate individual cells or cell-encapsulating building blocks,” says Demirci. “This is a very exciting and rapidly evolving field that holds a lot of promise in medicine.”

___________________________

Brigham and Women’s Hospital. “New advance in 3-D printing and tissue engineering technology.” ScienceDaily. ScienceDaily, 10 February 2014.

NanoTechnology, NanoBots, and Computer Manufacturing

January 24, 2012


In the early 1990s, BASF televised a series of commercials touting its ability to make products stronger, brighter, and better. [BASF Commercial on YouTube] BASF credited its chemical engineering for these improvements. It turns out this was only partly accurate: BASF achieved these feats through ahead-of-its-time product engineering that incorporated early implementations of consumer-product nanotechnology.

Nanotechnology provides functional capability at the scale of atoms and contributes directly to the strength, durability, and functional characteristics of molecular structures.

To understand the world of nanotechnology, one has to come to grips with the units of measurement involved. One centimeter is one-hundredth of a meter, a millimeter is one-thousandth of a meter, and a micrometer is one-millionth of a meter. As small as some of these measurements may seem, they are huge when compared to the nanoscale. A nanometer (nm) is one-billionth of a meter, smaller than the wavelength of visible light and about a hundred-thousandth the width of a human hair.
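For a sense of proportion, here are a few lines of Python arithmetic (an illustrative sketch; the hair width of roughly 0.1 mm is an assumed typical value):

```python
meter = 1.0
centimeter = meter / 100              # one-hundredth of a meter
millimeter = meter / 1_000            # one-thousandth of a meter
micrometer = meter / 1_000_000        # one-millionth of a meter
nanometer = meter / 1_000_000_000     # one-billionth of a meter

hair_width = 0.1 * millimeter         # assumed typical human hair, about 100 micrometers
print(hair_width / nanometer)         # 100000.0 -> a nanometer is ~1/100,000 of a hair's width
```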

Wikipedia says this regarding nanotechnology: “Nanotechnology (sometimes shortened to  “nanotech”) is the study of manipulating matter on an atomic and molecular scale. Generally, nanotechnology deals with developing materials, devices, or other structures possessing at least one dimension sized from 1 to 100 nanometers. Quantum mechanical effects are important at this quantum-realm scale.

Nanotechnology is very diverse, ranging from extensions of conventional device physics to completely new approaches based upon molecular self-assembly, from developing new materials with dimensions on the nanoscale to investigating whether we can directly control matter on the atomic scale. Nanotechnology entails the application of fields of science as diverse as surface science, organic chemistry, molecular biology, semiconductor physics, microfabrication, etc.

There is much debate on the future implications of nanotechnology. Nanotechnology may be able to create many new materials and devices with a vast range of applications, such as in medicine, electronics, biomaterials and energy production. On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials, and their potential effects on global economics, as well as speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.”

It appears the field of nanotechnology promises as much excitement for future product development as it raises concern about its possibilities and uses.

Graphene

Graphene is an allotrope of carbon whose structure is a one-atom-thick planar sheet of bonded carbon atoms densely packed in a honeycomb crystal lattice. Graphene is most easily visualized as atomic-scale chicken wire made of carbon atoms.

Graphene nanotechnology could replace the silicon transistors now in your computer with transistors based on carbon nanotubes. It has been discovered that carbon nanotubes can be used to produce smaller and faster components: if the silicon in the transistor channel is exchanged for a carbon nanotube, the transistor can be made smaller and faster. By its very nature, graphene provides an ideal foundational structure for constructing these nanotubes.

Most recently, nano-physicists in Copenhagen, Denmark have made a discovery that could change the way data is stored on computers. Using rolled-up sheets of graphene as nanotubes, they found that by placing a nanotube between magnetic electrodes, the direction of a single electron spin trapped on the nanotube can be controlled by an electric potential. This development, in the field known as “spintronics,” has been hailed as a long-sought breakthrough that could redefine the manner in which information is stored, manipulated, and retrieved in future computing devices.

This discovery makes it possible to combine electricity and magnetism in a new transistor concept. In their experiments, the nano-physicists use carbon nanotubes as transistors. Such nanoscale structures could speed up computers dramatically.

Perhaps the most thrilling future possibility of nanotechnology is the creation of the nanobot, a still-hypothetical molecular robot. Nanobots would have several key properties. First, they could reproduce themselves: if they can reproduce once, then they can, in principle, create an unlimited number of copies; it simply takes creating the first. Second, they would be capable of identifying molecules and cutting them up at precise points. Third, by following a master code, they would be capable of reassembling those atoms into different arrangements. Once constructed, nanobots would provide a means for true automation of manufacturing processes. What begins with the fabrication and manipulation of molecules would evolve into the replication of larger and larger organic and non-organic systems. Ultimately, nanobots could become the basis of most, if not all, product manufacturing, including that of computing devices.

Nanobots do not yet exist, but once the first nanobot is successfully produced, it will almost certainly and fundamentally alter society as we know it.

Whether the impact on society is minimal or substantial, there is no question that nanobot technology will completely transform all types of manufacturing, including the manner in which computing devices are designed and built.

Moore’s Law

Moore’s law describes a long-term trend in the history of computing hardware. Simply stated, Moore’s Law asserts that the number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years. The capabilities of many digital electronic devices are strongly linked to Moore’s law: processing speed, memory capacity, sensors and even the number and size of pixels in digital cameras. A remarkable consequence of the law is that all of these are improving at (roughly) exponential rates as well.

This exponential improvement has dramatically enhanced the impact of digital electronics in nearly every segment of the world economy. Accordingly, Moore’s law describes a driving force of technological and social change at play in the global economy in the late 20th and early 21st centuries. Though some see the effects of Moore’s Law as having negative consequences for parts of the world’s environment, the technological, medical, and social breakthroughs resulting from these advancements have been remarkable.

But we are closing in on the end of this technological watershed. Many in the engineering sciences agree that around the year 2020 (some predict as early as 2015), further miniaturization of transistors on integrated circuits will become impossible; the limits of the current approach will have been reached.
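As a back-of-the-envelope illustration of that doubling rule (a rough sketch only; the starting transistor count and years below are arbitrary assumptions, not industry figures):

```python
def transistor_count(start_count, start_year, year, doubling_period=2):
    """Moore's-law estimate: the count doubles every `doubling_period` years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Hypothetical chip with one million transistors in 1990:
for year in (1990, 2000, 2010, 2020):
    print(year, f"{transistor_count(1_000_000, 1990, year):,.0f}")
# 1990 -> 1,000,000; 2000 -> 32,000,000; 2010 -> ~1.0 billion; 2020 -> ~32.8 billion
```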

Nanotechnology to the rescue! As renowned physicist Richard Feynman suggested in a speech he gave way back in 1959, “there’s plenty of room at the bottom”. Nanotechnology engineering experiments occurring today will take miniaturization to a scale thousands of times smaller than what is currently possible. As the invention of the transistor ushered in our current digital age, nanotechnology will bring about dramatic and far-reaching changes – changes to our technology, our products, and to the way we live.

Did it Work?!

January 19, 2012

Yesterday wasn’t just a day for SOPA-protesting Web sites to go dark or even become unavailable. As the news cycle unfolded, many statements were issued by prominent executives and politicians on the matter. Here’s a look at some of their comments:


Mark Zuckerberg, CEO, Facebook:

The internet is the most powerful tool we have for creating a more open and connected world. We can’t let poorly thought out laws get in the way of the internet’s development. Facebook opposes SOPA and PIPA, and we will continue to oppose any laws that will hurt the internet.

The world today needs political leaders who are pro-internet. We have been working with many of these folks for months on better alternatives to these current proposals. I encourage you to learn more about these issues and tell your congressmen that you want them to be pro-internet.

Sen. Ron Wyden (D., Ore.):

The Internet has become an integral part of everyday life precisely because it has been an open-to-all land of opportunity where entrepreneurs, thinkers and innovators are free to try, fail and then try again. The Internet has changed the way we communicate with each other, the way we learn about the world and the way we conduct business. It has done this by eliminating the tollgates, middle men, and other barriers to entry that have so often predetermined winners and losers in the marketplace. It has created a world where ideas, products and creative expression have an opportunity regardless of who offers them or where they originate.

Protect IP (PIPA) and the Stop Online Piracy Act (SOPA) are a step towards a different kind of Internet. They are a step towards an Internet in which those with money and lawyers and access to power have a greater voice than those who don’t. They are a step towards an Internet in which online innovators need lawyers as much or more than they need good ideas. And they are a step towards a world in which Americans have less of a voice to argue for a free and open Internet around the world.

Legal Team, Red Hat Software:

In a single generation, the Internet has transformed our world to such an extent that it is easy to forget its miraculous properties and take it for granted. It’s worth reminding ourselves, though, that our future economic growth depends on our ability to use the Internet to share new ideas and technology. Measures that block the freedom and openness of the Internet also hinder innovation. That poses a threat to the future success of Red Hat and other innovative companies.

The sponsors of SOPA and PIPA claim that the bills are intended to thwart web piracy. Yet, the bills overreach, and could put a website out of business after a single complaint. Web sites would vanish, and have little recourse, if they were suspected of infringing copyrights or trademarks.

The good news is that there is growing opposition from many quarters to these bills. Just this past weekend, the White House expressed serious concerns, opposing legislation — like SOPA and PIPA — that “reduces freedom of expression, increases cybersecurity risk, or undermines the dynamic, innovative global Internet.”

Lanham Napier, CEO, Rackspace:

In my last blog post on SOPA and PIPA, I explained why Rackspace — along with much of the Internet community — opposes these bills in their current form. They are well-intentioned, but would do more harm than good. Their enforcement provisions could be easily evaded, and they would undermine the security and stability of the Internet.

Since then, I and other Rackers have been working with key lawmakers to fix the bills so that they will (a) actually be effective in fighting online piracy, and (b) avoid disrupting the Internet or imposing unreasonable costs on Internet users and service providers.

We at Rackspace are on the front lines of the battle against copyright infringers and other online criminals. We employ dedicated teams that take enforcement actions under the Digital Millennium Copyright Act as well as our own strict Acceptable Use Policy every day. We agree that better tools are needed for this fight but SOPA and PIPA do not fit the bill.


Gary Shapiro, President and CEO, Consumer Electronics Association:

It is increasingly clear that bills causing collateral damage to innovation in the guise of fighting piracy are not politically viable. Now that unreasonable solutions to piracy have been shown not to work, it is time to explore reasonable ones. We urge policymakers to join CEA in support of the OPEN Act — a bicameral, bipartisan and narrowly targeted approach to fighting foreign “rogue websites.”

Paul Hortenstine, Motion Picture Association of America, which supports the bills:

The legislation targets criminals: foreign thieves who profit from pirated content and counterfeit goods. These foreign rogue websites are operating freely today while legitimate American businesses are opposing legislation that would block these criminal websites from the American market.

The Pirate Bay, a site that links visitors to pirated content and would arguably fit someone’s definition of “foreign rogue Web site”:

SOPA can’t do anything to stop TPB. Worst case we’ll change top level domain from our current .org to one of the hundreds of other names that we already also use. In countries where TPB is blocked, China and Saudi Arabia springs to mind, they block hundreds of our domain names. And did it work? Not really.

To fix the “problem of piracy” one should go to the source of the problem. The entertainment industry say they’re creating “culture” but what they really do is stuff like selling overpriced plushy dolls and making 11 year old girls become anorexic. Either from working in the factories that creates the dolls for basically no salary or by watching movies and tv shows that make them think that they’re fat.

Is The Laptop Dead? Yup

January 9, 2012
[Reprint Alert: Original Here]

BY KIT EATON

2012 is thought to be the year of the Ultrabook, but though these slim machines may prove successful, they can’t disguise one odd fact: the laptop is a dead design. When will it actually pass away and make room for a future device?

Intel has been pushing a reference design on Eastern manufacturers for months now, and the pressure is finally paying off. Maker after maker has revealed its own take on what’s dubbed the Ultrabook. Consumers may be pleased by the focus on high design, Intel will be pleased it has a new vehicle for its processors, and manufacturers will be pleased they have a seemingly new toy to promote and sell for profit. The Wall Street Journal has even written a piece on them: “For PCs, Hope in a Slim Profile,” and they’re predicted to be everywhere at CES 2012. The thing is the Ultrabook isn’t new, nor is it revolutionary. It’s proof that the laptop is now an evolutionary dead end in computer history.

A lightweight PC with long battery life, petite format, and full-featured PC functionality … that’s a rough description of an Ultrabook. Remember this, we’ll come back to it. But in essence the Ultrabook is a MacBook Air, only slightly more typically PC-like, and sporting some flavor of Microsoft Windows 7 aboard it as its OS. In the Mac versus PC war, this is perhaps the most complete example of a Mac design being cloned into a PC design paradigm–so much so that some Ultrabooks to be released are sure to attract the attention of Apple’s IP lawyers, so similar are they in shape, format, arrangement of ports and sockets, and color.

Apple’s innovation was to build an all-metal chassis (whose monocoque structure actually permits a slimmer shape) around a full-powered computer that lacks an optical drive, eschews a hard drive in favor of faster and more power-friendly solid-state storage at the expense of capacity, and offers only a few output ports. It’s a Jony Ive special, one might say: the Air is a laptop boiled down to its simplest essence, just a keyboard, screen, trackpad, and a few ports. The Air has become one of Apple’s fastest-selling machines, with users loving its almost instant-on speed, light but strong body, and pure, attractive design.

That’s what Intel is chasing, of course. The Ultrabook plan has hit a few snags, with early headlines suggesting makers were having difficulty meeting the Air’s $999 price point because of the raw cost of components, and later headlines noting that makers had to switch to cheaper materials and that Intel was forced to drop prices. But it looks like Intel’s effort will work out, and more and more Ultrabooks will probably arrive in 2012. With Apple rumored to be leading the charge by bringing the Air format to a 15-inch laptop, the Ultrabook format will probably sway the design of the majority of laptops produced from 2012 onward. They will sell because they offer significant benefits to users.

But remember that description of the Ultrabook? Almost to a word it fits an earlier laptop reinvention–the netbook. These cheap half-powered machines were incredibly popular a handful of years ago when the economic outlook was dim, and compared to the weighty “full” laptop, they seemed to offer a new degree of portability and extended battery life that promised new experiences to users.

They sold by the millions, but then the star faded: the economy picked up, and users realized these machines couldn’t in all circumstances substitute for the full-featured laptops of which they were a pale echo. Though the netbook is still on sale, it’s now merely another type of computer.

We are drawing the comparison between the two here–the Ultrabook is perhaps a more considered, full-featured version of the netbook.

Apple’s Air is the touchstone for what may be a laptop design evolution, but it’s not a revolution in the way the iPhone was for the smartphone business. The Air and the Ultrabook are merely the calm, polished peak of laptop design. There’s nothing extra, nothing superfluous; they offer powerful processing, speedy responses, and longer battery life than you might expect from their tote-friendly mass. But they still need the laptop staples: a keyboard, a webcam, ports, wireless connectivity, a quality screen, and a pointing device (in Apple’s case the simplest, most innovative implementation of the trackpad, in giant size).

There’s nowhere to go from here. How might one improve the Air into an Air II? It’s about as simple an edition of the laptop format (which Apple, to some extent, invented) as is possible. By definition, the Ultrabook is the same. You may add features like a touchscreen, 3-D, a built-in pico-projector, or some other tricks, but that would be gilding the lily; the essential format is the same. And it works: we’re all used to portable computing, and to using a keyboard and trackpad to control a windows/icons/mice/pointers user interface such as OS X or Windows 7.

And yes, if it ain’t broke … don’t fix it.

But it means the laptop is dead. There’s literally no place left to take it, innovatively. Makers will churn them out for several years yet, but they’ll be rewarmed editions of what we see in 2012. And when this sort of evolutionary cul-de-sac is reached, it means one thing: massive scope for an innovative new product to revolutionize portable computing for consumers around the world. Shrewd industry observers will suggest the tablet PC is perfectly poised to slot into this niche: it has a totally new user experience, it lets consumers relate to computers in a wholly new and more intimate way, it offers interactions that aren’t possible with the unwieldy hinged format of a laptop (such as motion-controlled gaming), and it’s a true go-anywhere device. If it evolves a little further past its currently perceived “lightweight” computing uses, it’ll be an even stronger contender.

We’re not saying laptops are going to disappear any time soon. They’re still selling incredibly well, and they will for some time. But the Ultrabook isn’t the silver bullet that secures their future; it’s more like a well-polished, perfectly refined full stop at the end of the design description of the device. Something better will soon heave into view, and we’ll love using it. That’s why the portable computing game is so hot, why there’s so much scope for innovation, and why the immediate future is so exciting.

 

Thank you, Google

December 17, 2011

Giving back in 2011

As the holiday season approaches we thought it was a good moment to update you on some grants we’re making to support education, technology and the fight against modern day slavery.

STEM and girls’ education
Science, technology, engineering and math (STEM) open up great opportunities for young people, so we’ve decided to fund 16 great programs in this area. These include Boston-based Citizen Schools and Generating Genius in the U.K., both of which work to expand the horizons of underprivileged youngsters. In total, our grants will provide enhanced STEM education for more than 3 million students.

In addition, we’re supporting girls’ education in the developing world. By giving a girl an education, you not only improve her opportunities, but those of her whole family. The African Leadership Academy provides merit scholarships to promising young women across the continent, and the Afghan Institute of Learning offers literacy classes to women and girls in rural Afghanistan. Groups like these will use our funds to educate more than 10,000 girls in developing countries.

Empowerment through technology
We’ve all been wowed by the entrepreneurial spirit behind the 15 awards in this category, all of which go to groups using the web, open source programming and other technology platforms to connect communities and improve access to information. Vittana, for instance, helps lenders offer loans to students in the developing world who have a 99 percent repayment rate—potentially doubling or tripling a recipient’s earning power. Code for America enables the web industry to share its skills with the public sector by developing projects that improve transparency and encourage civic engagement on a mass scale. And Switchboard is working with local mobile providers to help African health care workers create networks and communicate for free.

Fighting slavery and human trafficking
Modern day slavery is a multi-billion dollar industry that ruins the lives of around 27 million people. So we’re funding a number of groups that are working to tackle the problem. For instance, in India, International Justice Mission (IJM), along with The BBC World Service Trust, Action Aid and Aide et Action, are forming a new coalition. It will work on the ground with governments to stop slave labor by identifying the ring masters, documenting abuse, freeing individuals and providing them with therapy as well as job training. Our support will also help expand the reach of tools like the powerful Slavery Footprint calculator and Polaris Project’s National Trafficking Hotline.

To learn more about these organizations and how you can get involved, visit our Google Gives Back 2011 site and take a look at this video:

These grants, which total $40 million, are only part of our annual philanthropic efforts. Over the course of the year, Google provided more than $115 million in funding to various nonprofit organizations and academic institutions around the world; our in-kind support (programs like Google Grants and Google Apps for Education that offer free products and services to eligible organizations) came to more than $1 billion, and our annual company-wide GoogleServe event and related programs enabled individual Googlers to donate more than 40,000 hours of their own volunteer time.

As 2011 draws to a close, I’m inspired by this year’s grantees and look forward to seeing their world-changing work in 2012.

Posted by Shona Brown, SVP Google.org

Quantum Computing: What Is It, and Has It Arrived?

December 10, 2011

Dr. Michio Kaku, renowned theoretical physicist, author, and television personality, writes this about the potential of atomic-scale computing: “The most ambitious proposal is to use quantum computers, which actually compute on individual atoms themselves. Some claim that quantum computers are the ultimate computer, since the atom is the smallest unit that one can calculate on.” (Michio Kaku, “Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100”)

Wikipedia defines a Quantum Computer as, “a device for computation that makes direct use of quantum mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from traditional computers based on transistors. The basic principle behind quantum computation is that quantum properties can be used to represent data and perform operations on these data.”

In simpler language, quantum computing makes use of atomic-scale integration: making computations on the atoms themselves. And rather than being measured in bits – the smallest discrete pieces of manipulatable data – quantum information is measured in qubits.
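As a rough illustration of the difference (a sketch only; the equal-superposition state below is an arbitrary choice): a classical bit is either 0 or 1, whereas a qubit is described by two complex amplitudes, and measuring it yields 0 or 1 with probabilities given by the squared magnitudes of those amplitudes.

```python
import numpy as np

# A qubit state is a normalized vector of two complex amplitudes over |0> and |1>.
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)   # equal superposition

probs = np.abs(qubit) ** 2
print(probs)                 # [0.5 0.5] -> measurement gives 0 or 1 with equal probability

# Each simulated measurement collapses the superposition to a definite 0 or 1.
rng = np.random.default_rng(0)
print(rng.choice([0, 1], size=10, p=probs))
```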

Quantum Computing – Early to Present

Most physicists and computer manufacturers expect quantum computers eventually to replace the silicon chips currently in use. A similar evolution in processing technology occurred in the mid-to-late 1950s, when transistors replaced the technology of the time, the vacuum tube.

Although further technological advances are needed to fully harness the computing power of quantum mechanics, the race to develop that technology is already hotly contested.

The race began in 1998, when Los Alamos and MIT research partners were able to spread a single unit of quantum information across the nuclear spins of molecules in a solution of acid molecules, allowing the different spin states to be read out as quantum information. The race was on: to build smaller and more functional quantum platforms capable of handling additional qubits.

It took a couple of years before the next step, accomplished by scientists at Los Alamos National Laboratory when they developed a quantum computer inside a single drop of liquid. By manipulating the nuclear spins of the molecules in the drop, they realized a 7-qubit quantum register, surpassing all previous quantum data-manipulation achievements.

Fast forward six years, and the Canadian company D-Wave produced a 16-qubit quantum computer able to work through several complex pattern-matching problems. This work ushered in the present phase of technological advancement in quantum computing and is often cited as the benchmark for subsequent investigations.

Alongside these advances, graphene was isolated in 2004. This substance has been hailed as a new kind of wonder material, though it’s essentially a form of carbon, similar to pencil lead. Graphene is the king of small (it’s just one atom thick) and it’s highly conductive. Earlier in 2011, IBM built the first graphene circuit, and the company now says it can build graphene chips using production lines usually used for silicon, which bodes well for mass production.

Quantum Computing – The Promise

Technology changes and moves faster than most of us realize. The processing power many companies have harnessed is faster than most ever thought possible, but as fast as our current computer technology is, it remains quite slow in some respects. The world’s fastest supercomputer, Japan’s K computer, has roughly the processing power of one human brain: some things it can do as fast as an average human, and other things, such as pattern recognition, more slowly. By comparison, and although we are still very far from mastering this application of quantum mechanics, researchers have estimated that a quantum computer no bigger than a laptop has the potential to perform the equivalent of all human thought since the dawn of our species in a tiny fraction of a second.

As you can see, once quantum computing technology is mastered, the number of calculations it makes possible will be staggering.

Elementary quantum computers are in use in laboratories worldwide today. Although a practical workplace option is still some way off, today’s basic machines mainly need sounder fundamental system architecture before they can emerge as commonplace computing technology.

The Future

Despite the discoveries that have been made in manipulating atomic particles with micro-sized computing systems, the reality of quantum computing is still quite limited, and there are huge challenges to overcome. As Dr. Kaku relates, “When atoms are coherent and vibrating in phase with one another, the tiniest disturbances from the outside world can ruin this delicate balance and make the atoms decohere, so they no longer vibrate in unison. Even the passing of a cosmic ray or the rumble of a truck outside the lab can destroy this balance.” Unfortunately, once the atoms decohere, it is impossible to continue the calculation. Additionally, the inherent uncertainty of the quantum level creates computational challenges of its own: all calculations done on a quantum computer carry some uncertainty, so you have to repeat the computation many times. So 2 + 2 = 4, but only sometimes; if you repeat the calculation of 2 + 2 a number of times, the final answer averages out to 4. Even arithmetic becomes fuzzy on a quantum computer.
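A toy illustration of that repeat-and-average point (purely a sketch; the error model below is an invented stand-in for real decoherence, not a description of an actual quantum computer):

```python
import random

def noisy_two_plus_two(error_prob=0.2):
    """Pretend '2 + 2' computation that occasionally decoheres and drifts off the true value."""
    if random.random() < error_prob:
        return 4 + random.choice([-1, 1])    # a corrupted run returns 3 or 5
    return 4

runs = [noisy_two_plus_two() for _ in range(10_000)]
print(sum(runs) / len(runs))                 # averages out to approximately 4
```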

Dr. Kaku concludes with, “The decoherence problem and uncertainty issues are the most difficult barriers to creating quantum computers. Anyone who can solve these challenges will not only win a Nobel Prize but also become the richest person on earth.”

The future awaits!