Tuesday, July 22, 2014

Giant laser recreates extreme conditions inside planets


Researchers can now re-create and accurately measure the material properties that control how giant planets evolve over time, information essential for understanding how these massive objects form. This study focused on carbon, the fourth most abundant element in the cosmos (after hydrogen, helium and oxygen), which plays an important role in many types of planets within and outside our solar system. The research appears in the July 17 edition of the journal Nature.
Using the largest laser in the world, the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory, teams from the Laboratory, the University of California, Berkeley, and Princeton University squeezed samples to 50 million times Earth's atmospheric pressure, comparable to the pressures at the centers of Jupiter and Saturn. Of NIF's 192 laser beams, the team used 176, with the energy delivered over time shaped exquisitely to produce a pressure wave that compressed the material for a brief period. The sample — diamond — was vaporized in less than 10 billionths of a second.
Though diamond is the least compressible material known, the researchers were able to compress it to an unprecedented density greater than that of lead at ambient conditions.
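For scale, the figures quoted above can be put in familiar units with a quick back-of-the-envelope calculation. The sketch below uses only the numbers in the text plus standard handbook values; the exact densities reached in the experiment are reported in the Nature paper.

```python
# Rough unit conversions for the figures quoted above (illustrative only).

ATM_IN_PA = 101_325            # 1 standard atmosphere in pascals

peak_pressure_atm = 50e6       # "50 million times Earth's atmospheric pressure"
peak_pressure_pa = peak_pressure_atm * ATM_IN_PA
print(f"Peak pressure ~ {peak_pressure_pa:.1e} Pa ~ {peak_pressure_pa / 1e12:.0f} TPa")

# Reference densities (standard handbook values, g/cm^3)
rho_diamond_ambient = 3.5
rho_lead_ambient = 11.3

# "Denser than lead at ambient conditions" implies compression by at least this factor:
print(f"Compression factor > {rho_lead_ambient / rho_diamond_ambient:.1f}x diamond's ambient density")
```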
"The experimental techniques developed here provide a new capability to experimentally reproduce pressure-temperature conditions deep in planetary interiors," said Ray Smith, LLNL physicist and lead author of the paper. Such pressures have been reached before, but only with shock waves that also create high temperatures — hundreds of thousands of degrees or more — that are not realistic for planetary interiors. The technical challenge was keeping temperatures low enough to be relevant to planets. The problem is similar to moving a plow slowly enough to push sand forward without building it up in height. This was accomplished by carefully tuning the rate at which the laser intensity changes with time.
"This new ability to explore matter at atomic scale pressures, where extrapolations of earlier shock and static data become unreliable, provides new constraints for dense matter theories and planet evolution models," said Rip Collins, another Lawrence Livermore physicist on the team.
The data described in this work are among the first tests of predictions made in the early days of quantum mechanics, more than 80 years ago, which are routinely used to describe matter at the centers of planets and stars. While agreement between these new data and theory is good, important differences were discovered, suggesting potential hidden treasures in the properties of diamond compressed to such extremes. Future experiments on NIF will focus on further unlocking these mysteries.


Monday, July 21, 2014

Solar panels light the way from carbon dioxide to fuel

Research to curb global warming caused by rising levels of atmospheric greenhouse gases, such as carbon dioxide, usually involves three areas: developing alternative energy sources, capturing and storing greenhouse gases, and repurposing excess greenhouse gases. Drawing on two of these approaches, researchers in the laboratory of Andrew Bocarsly, a Princeton professor of chemistry, collaborated with researchers at start-up company Liquid Light Inc. of Monmouth Junction, New Jersey, to devise an efficient method for harnessing sunlight to convert carbon dioxide into a potential alternative fuel known as formic acid. The study was published June 13 in the Journal of CO2 Utilization.
Pictured with the photovoltaic-electrochemical cell system from left to right: Graduate student James White (Princeton), Professor Andrew Bocarsly (Princeton and Liquid Light) and principal engineer Paul Majsztrik (Liquid Light). (Photo by Frank Wojciechowski)
The transformation from carbon dioxide and water to formic acid was powered by a commercial solar panel provided by the energy company PSE&G that can be found atop electric poles across New Jersey. The process takes place inside an electrochemical cell, which consists of metal plates the size of rectangular lunch-boxes that enclose liquid-carrying channels.
To maximize the efficiency of the system, the amount of power produced by the solar panel must match the amount of power the electrochemical cell can handle, said Bocarsly. This optimization process is called impedance matching. By stacking three electrochemical cells together, the research team was able to reach almost 2 percent energy efficiency, which is twice the efficiency of natural photosynthesis. It is also the best energy efficiency reported to date using a man-made device.
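The idea of impedance matching here can be illustrated with a toy calculation: a solar panel delivers the most power only when its load holds it near its maximum-power-point voltage, so the number of electrochemical cells wired in series should be chosen so the stack voltage lands near that point. The sketch below uses entirely made-up panel and cell parameters (a simple diode-model panel and an assumed fixed operating voltage per cell) just to show the matching logic; it is not the device reported in the paper.

```python
import numpy as np

# Toy single-diode model of a solar panel (all parameters hypothetical).
I_SC = 1.0          # short-circuit current, A
V_OC = 11.0         # open-circuit voltage, V
A = 0.6             # diode "thermal voltage" scale for the whole panel, V
I_0 = I_SC / (np.exp(V_OC / A) - 1.0)

def panel_current(v):
    """Panel output current at terminal voltage v (diode model)."""
    return I_SC - I_0 * (np.exp(v / A) - 1.0)

# Maximum power point of the panel, found numerically.
v_grid = np.linspace(0, V_OC, 2000)
p_grid = v_grid * panel_current(v_grid)
v_mpp, p_mpp = v_grid[p_grid.argmax()], p_grid.max()
print(f"Panel MPP: {v_mpp:.1f} V, {p_mpp:.2f} W")

# Assume each electrochemical cell clamps roughly 3 V across itself (hypothetical).
V_CELL = 3.0
for n_cells in (1, 2, 3):
    v_stack = n_cells * V_CELL
    p_drawn = v_stack * panel_current(v_stack)
    print(f"{n_cells} cell(s) in series: operating at {v_stack:.0f} V, "
          f"drawing {p_drawn:.2f} W ({100 * p_drawn / p_mpp:.0f}% of MPP)")
```

With these invented numbers, a single cell pulls the panel far below its maximum-power voltage, while three cells in series sit close to it, which is the same matching logic the researchers exploited by stacking cells.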
A number of energy companies are interested in storing solar energy as formic acid in fuel cells. Additionally, formate salt—readily made from formic acid—is the preferred de-icing agent on airplane runways because it is less corrosive to planes and safer for the environment than chloride salts. With increased availability, formate salts could supplant more harmful salts in widespread use.
Using waste carbon dioxide and easily obtained machined parts, this approach offers a promising route to a renewable fuel, Bocarsly said.
This work was financially supported by Liquid Light, Inc., which was cofounded by Bocarsly, and the National Science Foundation under grant no. CHE-0911114.
White, J. L.; Herb, J. T.; Kaczur, J. J.; Majsztrik, P. W.; Bocarsly, A. B. Photons to formate: Efficient electrochemical solar energy conversion via reduction of carbon dioxide. Journal of CO2 Utilization. Available online June 13, 2014.

Friday, July 18, 2014

Quantum bounce could make black holes explode

If space-time is granular, it could reverse gravitational collapse and turn it into expansion.
A. Corichi/J.P. Ruiz
The collapse of a star into a black hole could be a temporary effect that leads to the formation of a 'white hole', suggests a new model based on a theory known as loop quantum gravity.
Black holes might end their lives by transforming into their exact opposite — 'white holes' that explosively pour all the material they ever swallowed into space, say two physicists. The suggestion, based on a speculative quantum theory of gravity, could solve a long-standing conundrum about whether black holes destroy information.
The theory suggests that the transition from black hole to white hole would take place right after the initial formation of the black hole, but because gravity dilates time, outside observers would see the black hole lasting billions or trillions of years or more, depending on its size. If the authors are correct, tiny black holes that formed during the very early history of the Universe would now be ready to pop off like firecrackers and might be detected as high-energy cosmic rays or other radiation. In fact, they say, their work could imply that some of the dramatic flares commonly considered to be supernova explosions could in fact be the dying throes of tiny black holes that formed shortly after the Big Bang.
Albert Einstein’s general theory of relativity predicts that when a dying star collapses under its own weight, it can reach a stage at which the collapse is irreversible and no known force of nature can stop it. This is the formation of a black hole: a spherical surface, known as the event horizon, appears, shrouding the star inside from outside observers while it continues to collapse, because nothing — not even light or any other sort of information — can escape the event horizon.
Because dense matter curves space, ‘classical’ general relativity predicts that the star inside will continue to shrink into what is known as a singularity, a region where matter is infinitely dense and space is infinitely curved. In such situations, the known laws of physics cease to be useful.
Many physicists, however, believe that at some stage in this process, quantum-gravity effects should take over, arresting the collapse and avoiding the infinities.
In a loop
One of the leading approaches to merging quantum theory and gravity, pioneered by, among others, theoretical physicist Carlo Rovelli of Aix-Marseille University in France, posits that it is not just gravity but space-time itself that is quantized, woven from tiny, individual loops that cannot be subdivided any further. The loops in this ‘loop quantum gravity’ — a theoretical attempt that has yet to find experimental support — would be so tiny that to any observer space-time looks smooth and continuous. In the new work [1], Rovelli and his Aix-Marseille colleague Hal Haggard have calculated that the loop structure would halt the collapse of a black hole.
The collapsing star would reach a stage at which its inside can shrink no further, because the loops cannot be compressed into anything smaller, and in fact they would exert an outward pressure that theorists call a quantum bounce, transforming a black hole into a white hole. Rather than being shrouded by a true, eternal event horizon, the event would be concealed by a temporary 'apparent horizon', says Rovelli. (Theoretical physicist Stephen Hawking of the University of Cambridge, UK, has recently suggested that true event horizons would be incompatible with quantum physics.)
Other loop-quantum theorists have made similar calculations for cases in which it is not just a star that is collapsing but an entire universe [2,3]. They found that the universe could bounce back, and suggested that our own Universe’s Big Bang could in fact have been such a ‘big bounce’. Rovelli and Haggard have now shown that the quantum bounce does not require an entire universe to collapse at once. “We think this is a possible picture,” says Rovelli. “We have found that the [transformation] process can be completely contained in a limited region of space-time. Everything outside behaves following the classical Einstein equations.”

Information paradox

If black holes turn into white holes and release all of their innards out again, it could provide a solution to one of the most troublesome questions of fundamental physics. Hawking calculated in the 1970s that a black hole should emit radiation out of its event horizon, slowly losing energy and shrinking in the process until it completely disappears. This 'Hawking radiation' means that information carried by the matter that fell into the black hole would then seem to vanish forever. This would violate one of the fundamental principles of quantum theory, according to which information cannot be destroyed. 
If the new work sheds any light on this black-hole information paradox, “it would be important”, says theoretical physicist Steven Giddings of the University of California, Santa Barbara. “Understanding how information escapes from a black hole is the key question for the quantum mechanics of black holes, and possibly for quantum gravity itself.”
The authors acknowledge that some of the conclusions in their paper have yet to be fleshed out with detailed calculations. Other physicists, including Joseph Polchinski of the University of California, Santa Barbara, also worry that the scenario involves quantum effects that are unrealistically large.
Theoretical physicist Donald Marolf of the University of California, Santa Barbara, cautions that the quantum bounce could violate one of the most fundamental principles of physics: that entropy, a measure of the amount of disorder in a system, can increase but can never decrease. He says that the outgoing material from the white hole, initially packed into a small region, would seem to have a smaller entropy than the black hole itself. Rovelli and Haggard maintain that in their scenario entropy would not decrease.
Nonetheless, the work puts the idea of a quantum bounce on a surer footing, says Abhay Ashtekar of Pennsylvania State University in University Park, another one of the founders of loop quantum gravity. But he says that he would like to see more detailed calculations before he is convinced.

All in the timing

Rovelli notes that he and Haggard must calculate more carefully how much time it takes for the black hole to transform into a white hole. Their current, rough estimate — a few thousandths of a second — is crucial to pin down, because the intense gravitational field of a black hole stretches light waves and dilates time, so that an outside observer would see the transformation occur over a much longer time.
If the time, as seen by an outside observer, were too short, then all the black holes that ever formed ought to have exploded and vanished, contradicting astrophysical observations. On the other hand, if the observed time were too long, the transformation to white hole would be inconsequential because black holes would already have fizzled out owing to Hawking radiation. The team calculates that for a black hole the mass of the Sun, it would take about a thousand trillion times the current age of the Universe to convert into a white hole.
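To see why that number matters, the two competing clocks in the argument, Hawking evaporation versus the externally observed bounce time, can be compared directly. The sketch below uses the standard Hawking evaporation-time formula and the rough figure quoted in the article; it is an order-of-magnitude illustration, not a calculation from the Rovelli-Haggard paper.

```python
import math

# Physical constants (SI units)
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34        # reduced Planck constant, J s
C = 2.998e8             # speed of light, m/s
YEAR = 3.156e7          # seconds per year
M_SUN = 1.989e30        # solar mass, kg
AGE_UNIVERSE = 1.38e10  # years

# Standard Hawking evaporation time: t = 5120 * pi * G^2 * M^3 / (hbar * c^4)
def hawking_lifetime_years(mass_kg):
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4) / YEAR

t_hawking = hawking_lifetime_years(M_SUN)

# Bounce time as seen from outside, per the article:
# "about a thousand trillion times the current age of the Universe"
t_bounce = 1e15 * AGE_UNIVERSE

print(f"Hawking evaporation time (1 solar mass): ~{t_hawking:.1e} years")
print(f"Quoted black-to-white-hole time:         ~{t_bounce:.1e} years")
print(f"The bounce would pre-empt evaporation by ~{t_hawking / t_bounce:.0e}x")
```

The comparison shows the quoted bounce time, though enormous, still falls vastly short of the Hawking evaporation time for a solar-mass black hole, so in this picture the bounce would come first.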
In a recent paper [4], Giddings proposes that information may escape black holes in a less explosive fashion, made possible by the grainy quantum structure of space-time. This would cause fluctuations in the geometry of the region just outside the black hole that could be detectable by the future Event Horizon Telescope, a global network of radio telescopes, when it studies the pattern of light surrounding Sagittarius A*, the supermassive black hole at the centre of our Galaxy.
Nature
 
doi:10.1038/nature.2014.15573

References

  1. Rovelli, C. & Haggard, H. M. Preprint at http://arxiv.org/abs/1407.0989 (2014).
  2. Bojowald, M. Nature Phys. 3, 523–525 (2007).
  3. Agullo, I., Ashtekar, A. & Nelson, W. Phys. Rev. Lett. 109, 251301 (2012).
  4. Giddings, S. B. Preprint at http://arxiv.org/abs/1406.7001 (2014).

Strange physics turns off laser (Nature Communications)

An electron microscope image shows two lasers placed just two microns apart from each other. (Image source: Türeci lab)
Inspired by anomalies that arise in certain mathematical equations, researchers have demonstrated a laser system that paradoxically turns off when more power is added rather than becoming continuously brighter.
The finding, by a team of researchers at the Vienna University of Technology and Princeton University, could lead to new ways to manipulate the interaction of electronics and light, an important tool in modern communications networks and high-speed information processing.
The researchers published their results June 13 in the journal Nature Communications.
Their system involves two tiny lasers, each one-tenth of a millimeter in diameter, or about the width of a human hair. The two are nearly touching, separated by a distance 50 times smaller than the lasers themselves. One is pumped with electric current until it starts to emit light, as is normal for lasers. Power is then added slowly to the other, but instead of it also turning on and emitting even more light, the whole system shuts off.
“This is not the normal interference that we know,” said Hakan Türeci, assistant professor of electrical engineering at Princeton, referring to the common phenomenon of light waves or sound waves from two sources cancelling each other. Instead, he said, the cancellation arises from the careful distribution of energy loss within an overall system that is being amplified.
Interactions between two lasers
By manipulating minute areas of gain and loss within individual lasers (shown as peaks and valleys in the image), researchers were able to create paradoxical interactions between two nearby lasers. (Image source: Türeci lab)
“Loss is something you normally are trying to avoid,” Türeci said. “In this case, we take advantage of it and it gives us a different dimension we can use – a new tool – in controlling optical systems.”
The research grows out of Türeci’s longstanding work on mathematical models that describe the behavior of lasers. In 2008, he established a mathematical framework for understanding the unique properties and complex interactions that are possible in extremely small lasers – devices with features measured in micrometers or nanometers. Different from conventional desk-top lasers, these devices fit on a computer chip.
That work opened the door to manipulating gain or loss (the amplification or loss of an energy input) within a laser system. In particular, it allowed researchers to judiciously control the spatial distribution of gain and loss within a single system, with one tiny sub-area amplifying light and an immediately adjacent area absorbing the generated light.
Türeci and his collaborators are now using similar ideas to pursue counterintuitive ideas for using distribution of gain and loss to make micro-lasers more efficient.
The researchers’ ideas for taking advantage of loss derive from their study of mathematical constructs called “non-Hermitian” matrices in which a normally symmetric table of values becomes asymmetric. Türeci said the work is related to certain ideas of quantum physics in which the fundamental symmetries of time and space in nature can break down even though the equations used to describe the system continue to maintain perfect symmetry.
Over the past several years, Türeci and his collaborators at Vienna worked to show how the mathematical anomalies at the heart of this work, called “exceptional points,” could be manifested in an actual system. In 2012 (Ref. 3), the team published a paper in the journal Physical Review Letters demonstrating computer simulations of a laser system that shuts off as energy is being added. In the current Nature Communications paper, the researchers created an experimental realization of their theory using a light source known as a quantum cascade laser.
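The "laser that turns off when pumped harder" can be seen in a two-mode toy model of this kind of non-Hermitian system. Two coupled cavity modes are described by a 2x2 matrix whose diagonal carries each cavity's net gain or loss; near the exceptional point the two eigenvalues coalesce, and over a window of pump values both modes fall below threshold. The parameters below are invented for illustration and are not those of the quantum cascade device in the paper.

```python
import numpy as np

# Two coupled modes: H = [[w + i*(g1 - loss), k], [k, w + i*(g2 - loss)]]
# A mode lases when the imaginary part of its eigenvalue is positive.
W = 0.0        # common resonance frequency (irrelevant to thresholds)
LOSS = 1.0     # cavity loss, same for both (hypothetical units)
KAPPA = 0.3    # coupling between the two cavities
G1 = 1.2       # fixed pump on laser 1: net gain 0.2 > 0, so it lases on its own

print(" g2   max Im(eigenvalue)   lasing?")
for g2 in np.arange(0.0, 1.25, 0.1):
    H = np.array([[W + 1j * (G1 - LOSS), KAPPA],
                  [KAPPA, W + 1j * (g2 - LOSS)]])
    im_max = np.linalg.eigvals(H).imag.max()
    print(f"{g2:4.1f}   {im_max:+.3f}              {'yes' if im_max > 0 else 'no'}")
```

With these numbers the system lases at low pump on the second cavity, switches off over an intermediate range of added pump as the modes hybridize near the exceptional point, and only turns back on at still higher pump, which is the counterintuitive behavior the experiment demonstrated.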
The researchers report in the article that results could be of particular value in creating “lab-on-a-chip” devices – instruments that pack tiny optical devices onto a single computer chip. Understanding how multiple optical devices interact could provide ways to manipulate their performance electronically in previously unforeseen ways. Taking advantage of the way loss and gain are distributed within tightly coupled laser systems could lead to new types of highly accurate sensors, the researchers said.
“Our approach provides a whole new set of levers to create unforeseen and useful behaviors,” Türeci said.
The work at Vienna, including creation and demonstration of the actual device, was led by Stefan Rotter along with Martin Brandstetter, Matthias Liertzer, C. Deutsch, P. Klang, J. Schöberl, G. Strasser and K. Unterrainer. Türeci participated in the development of the mathematical models underlying the phenomena. The work on the 2012 computer simulation of the system also included Li Ge, who was a post-doctoral researcher at Princeton at the time and is now an assistant professor at City University of New York.
The work was funded by the Vienna Science and Technology Fund and the Austrian Science Fund, as well as by the National Science Foundation through a major grant for the Mid-Infrared Technologies for Health and the Environment Center based at Princeton and by the Defense Advanced Research Projects Agency.
M. Brandstetter, M. Liertzer, C. Deutsch, P. Klang, J. Schöberl, H. E. Türeci, G. Strasser, K. Unterrainer & S. Rotter. Reversing the pump dependence of a laser at an exceptional point. Nature Communications, 13 June 2014. DOI: 10.1038/ncomms5034
Science 2 May 2008. DOI: 10.1126/science.1155311
Physical Review Letters, 24 April 2012. DOI: 10.1103/PhysRevLett.108.173901



Study shows significant increase in antibiotic use across the world

Global use of antibiotics is surging, according to Princeton University researchers who have conducted a broad assessment of antibiotic consumption around the world.
The study, "Global Trends in Antibiotic Consumption, 2000-2010," found that worldwide antibiotic use has risen a staggering 36 percent over those 10 years, with five countries — Brazil, Russia, India, China and South Africa (BRICS) — responsible for more than three-quarters of that surge, according to study authors Thomas Van BoeckelSimon LevinBryan GrenfellRamanan Laxminarayan and Quentin Caudron of Princeton. 
Among the 16 groups of antibiotics studied, cephalosporins, broad-spectrum penicillins and fluoroquinolones accounted for more than half of that increase, with consumption rising 55 percent from 2000 to 2010. 
The study adds quantitative weight to growing alarm over antibiotic-resistant pathogens and the loss of efficacy among antibiotics used to combat the most common illnesses. In addition, the report highlights increasing resistance to carbapenems and polymyxins, two classes of drugs long considered "last resort" antibiotics for illnesses with no other known treatment.
Overall, the study reviewed patterns, seasonality and frequency of use of antibiotics in 71 countries. The findings of the report are featured July 10 in the journal The Lancet Infectious Diseases. 
There was some good news. The data underscore the welcome evidence that more global citizens are able to access and purchase antibiotics. But that use is not being effectively monitored by health officials, from doctors to hospital workers to clinicians, noted the researchers. Consequently, antibiotic use is both rampant and less targeted. 
That reality is driving antibiotic resistance at an unprecedented rate.
"We have to remember that before we had antibiotics, it was pretty easy to die of a bacterial infection," said Laxminarayan, a research scholar with the Princeton Environmental Institute. "And we're choosing to go back into a world where you won't necessarily get better from a bacterial infection. It's not happening at a mass scale, but we're starting to see the beginning of when the antibiotics are not working as well."
The study found that India was the single-largest consumer of antibiotics in the world in 2010, followed by China and the United States. 
The study also found that antibiotic consumption has flattened in the United States, compared with the five BRICS countries. But Americans still consume far more antibiotics per capita than any other population, at a rate more than twice that of India.
"This paper breaks new ground with the comparative antibiotic consumption data by country of the first decade of the 21st century," said Professor Dame Sally Davies, chief medical officer for England and chief scientific adviser for the Department of Health, London. "There is a direct relationship between consumption and development of antibiotic resistance, so the data is key for us all developing 'National Action Plans Against Antimicrobial Resistance' as set out in the World Health Assembly Resolution in May."
The study noted that antibiotic use tended to peak at different times of the year, corresponding in almost every case with the onset of the flu season. In the northern hemisphere, for example, consumption peaked between January and March, while in the southern hemisphere it peaked between July and November. One notable exception was India, for which usage peaked between July and September, correlating with the end of the monsoon season.
"This is a problem at the scale of climate change in terms of urgency," said Laxminarayan. "But we don't have anything close to the architecture of science to look at this problem, to look at solutions, to look at where the problem is the worst."
Laxminarayan and Levin received a grant from the University's Grand Challenges Program to study the problem of antibiotic resistance as part of general work on "common property" problems, those areas in which individuals or small groups make decisions — such as on how they use antibiotics — that have national or global consequences. 
The research was conducted by Van Boeckel, postdoctoral research fellow; Caudron, postdoctoral researcher; Grenfell, the Kathryn Briger and Sarah Fenton Professor of Ecology and Evolutionary Biology and Public Affairs at the Woodrow Wilson School; Levin, the George M. Moffett Professor of Biology and professor of ecology and evolutionary biology; and Laxminarayan. In addition, two scholars from the Center for Disease Dynamics, Economics & Policy in Washington, D.C., were involved. The data supporting the study results were drawn from IMS Health, a private company that collects pharmaceutical sales data worldwide.
The research group frames the issue of antibiotic resistance as more than a global health concern. Because the study found a strong correlation between seasonality and antibiotic consumption — for example during flu seasons, for which antibiotic use is "inappropriate" — Levin sees the issue as having a strong environmental component, as well.
"Keep in mind that we're not the primary organism on this planet. It's really bacteria," Levin said. "And we in no way understand bacteria to the degree that we should. Instead, we're just dumping antibiotics on them, and then we think we're winning the war on bugs. There is no winning the war on bugs. 
"We're modifying that bacteria, where we dump antibiotics into agriculture, we put them on fruit trees, we prescribe them in outpatient clinics — we have been running this massive experiment for about 50 years without really knowing what it's doing. What are the consequences of that?" 
Programs promoting "rational use" of antibiotics should be a national and global priority, said the report's authors. That process has to begin with the BRICS countries, which are experiencing the highest rates of increase in antibiotic consumption. 
While the report did not include specific recommendations, Laxminarayan suggested that a universally adopted algorithm for prescribing antibiotics might help assure their appropriate use. Such an algorithm would not require a doctor's authorization, he said. It would merely require a practitioner committed to observing it.


The research was funded through the U.S. Department of Homeland Security; the Bill & Melinda Gates Foundation; the RAPIDD Program; the National Institutes of Health Fogarty International Center; and the Princeton University Grand Challenges Program.

Sunday, July 13, 2014

Genes that influence children's reading skills also affect their maths

Study suggests that half of the genes that affect 12-year-olds' literacy also play a role in their abilities in mathematics
Many of the genes that affect how well a child can read at secondary school have an impact on their maths skills too, researchers say.
Scientists found that around half of the genes that influenced the literacy of 12-year-olds also played a role in their mathematical abilities. The findings suggest that hundreds and possibly thousands of subtle DNA changes in genes combine to help shape a child's performance in both reading and mathematics.
But while genetic factors are important, environmental influences, such as home life and schooling, contributed roughly the same amount as genetics in the children studied, the researchers said.
"Children differ genetically in how easy or difficult they find learning, and we need to recognise, and respect, these individual differences," saidRobert Plomin, professor of behavioural genetics at Kings College London and an author on the study.
"Finding such strong genetic influence does not mean that there is nothing we can do if a child finds learning difficult. Heritability does not imply that anything is set in stone. It just means it may take more effort from parents, schools and teachers to bring the child up to speed."
In the study, 12-year-old twins and unrelated children from around 2,800 British families were assessed for reading comprehension and fluency, and tested on mathematics questions from the UK national curriculum. This information was then analysed alongside the children's DNA.
Oliver Davis, a geneticist at University College London, said: "We looked at this question in two ways, by comparing the similarity of thousands of twins, and by measuring millions of tiny differences in their DNA. Both analyses show that similar collections of subtle DNA differences are important for reading and maths."
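The twin-comparison half of that analysis rests on a classical piece of arithmetic: identical (MZ) twins share essentially all of their segregating DNA while fraternal (DZ) twins share about half, so the gap between their trait correlations gives a rough heritability estimate (Falconer's formula). The numbers below are invented purely to show the calculation and are not the study's estimates, nor necessarily the exact model the authors fitted.

```python
# Falconer's classical twin estimates (illustrative numbers, not the study's).
r_mz = 0.75   # hypothetical MZ-twin correlation for a trait (e.g., a reading score)
r_dz = 0.45   # hypothetical DZ-twin correlation for the same trait

h2 = 2 * (r_mz - r_dz)   # additive genetic contribution (heritability)
c2 = 2 * r_dz - r_mz     # shared-environment contribution
e2 = 1 - r_mz            # non-shared environment plus measurement error

print(f"heritability h^2 ~ {h2:.2f}, shared environment c^2 ~ {c2:.2f}, "
      f"unique environment e^2 ~ {e2:.2f}")
```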
The study did not identify specific genes linked to numeracy or literacy, and researchers do not know what the various gene variants do. But they may affect brain development and function, or other biological processes that are important for learning both skills.
The findings build on previous studies showing that genetic variations among British schoolchildren explain most of the differences in how well they perform in exams.
Writing in the journal Nature Communications, the authors explain that understanding how genes affect children's abilities "increases our chances of developing effective learning environments that will help individuals attain the highest level of literacy and numeracy, increasingly important skills in the modern world".
Chris Spencer at Oxford University said: "We're moving into a world where analysing millions of DNA changes, in thousands of individuals, is a routine tool in helping scientists to understand aspects of human biology. This study used the technique to help investigate the overlap in the genetic component of reading and maths ability in children. Interestingly, the same method can be applied to pretty much any human trait, for example to identify new links between diseases, or the way in which people respond to treatments."

Making quantum connections: The speed of information in a spin network

Date:
July 9, 2014
Source:
Joint Quantum Institute, University of Maryland
Summary:
Physicists are pretty adept at controlling quantum systems and even making certain entangled states. Researchers are putting these skills to work to explore the dynamics of correlated quantum systems. Recent results investigated how information flows through a quantum many-body system.

In quantum mechanics, interactions between particles can give rise to entanglement, which is a strange type of connection that could never be described by a non-quantum, classical theory. These connections, called quantum correlations, are present in entangled systems even if the objects are not physically linked (with wires, for example). Entanglement is at the heart of what distinguishes purely quantum systems from classical ones; it is why they are potentially useful, but it sometimes makes them very difficult to understand.
Physicists are pretty adept at controlling quantum systems and even making certain entangled states. Now JQI researchers, led by theorist Alexey Gorshkov and experimentalist Christopher Monroe, are putting these skills to work to explore the dynamics of correlated quantum systems. What does it mean for objects to interact locally versus globally? How do local and global interactions translate into larger, increasingly connected networks? How fast can certain entanglement patterns form? These are the kinds of questions that the Monroe and Gorshkov teams are asking. Their recent results investigating how information flows through a quantum many-body system are published this week in the journal Nature, and in a second paper to appear in Physical Review Letters.
Researchers can engineer a rich selection of interactions in ultracold atom experiments, allowing them to explore the behavior of complex and massively intertwined quantum systems. In the experimental work from Monroe's group, physicists examined how quickly quantum connections formed in a crystal of eleven ytterbium ions confined in an electromagnetic trap. The researchers used laser beams to implement interactions between the ions. Under these conditions, the system is described by certain types of 'spin' models, which are a vital mathematical representation of numerous physical phenomena including magnetism. Here, each atomic ion has isolated internal energy levels that represent the various states of spin.
In the presence of carefully chosen laser beams the ion spins can influence their neighbors, both near and far. In fact, tuning the strength and form of this spin-spin interaction is a key feature of the design. In Monroe's lab, physicists can study different types of correlated states within a single pristine quantum environment.
To see dynamics, the researchers initially prepared the ion spin system in an uncorrelated state. Next, they abruptly turned on a global spin-spin interaction. The system is effectively pushed off-balance by such a fast change and the spins react, evolving under the new conditions. The team took snapshots of the ion spins at different times and observed the speed at which quantum correlations grew.
The spin models themselves do not have an explicitly built-in limit on how fast such information can propagate. The ultimate limit, in both classical and quantum systems, is given by the speed of light. However, decades ago, physicists showed that a slower information speed limit emerges due to some types of spin-spin interactions, similar to sound propagation in mechanical systems. While the limits are better known in the case where spins predominantly influence their closest neighbors, calculating constraints on information propagation in the presence of more extended interactions remains challenging. Intuitively, the more an object interacts with other distant objects, the faster the correlations between distant regions of a network should form. Indeed, the experimental group observes that long-range interactions provide a comparative speed-up for sending information across the ion-spin crystal. In the paper appearing in Physical Review Letters, Gorshkov's team improves existing theory to much more accurately predict the speed limits for correlation formation, in the presence of interactions ranging from nearest-neighbor to long-range.
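The kind of quench the team performed can be mimicked numerically for a handful of spins. The sketch below evolves a small long-range transverse-field Ising chain after a sudden switch-on of the interaction and records when connected spin-spin correlations at each distance first become appreciable. It is a generic stand-in with invented couplings, not the actual 11-ion parameters; the qualitative point is that shorter-range interactions (larger alpha) spread correlations more slowly.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

N = 8  # number of spins (kept small: exact evolution in a 2^8 = 256 state space)

def op_on_site(op, site):
    """Embed a single-site operator into the full 2^N-dimensional space."""
    full = op if site == 0 else id2
    for k in range(1, N):
        full = np.kron(full, op if k == site else id2)
    return full

def ising_hamiltonian(alpha, J=1.0, B=1.0):
    """Long-range transverse-field Ising model: sum_ij J/|i-j|^alpha sx_i sx_j + B sum_i sz_i."""
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N):
        H += B * op_on_site(sz, i)
        for j in range(i + 1, N):
            H += (J / abs(i - j)**alpha) * op_on_site(sx, i) @ op_on_site(sx, j)
    return H

def correlation_arrival_times(alpha, threshold=0.02, tmax=2.0, steps=80):
    """Quench from the all-up state; return the earliest time the connected
    <sz_0 sz_r> correlation exceeds `threshold`, for each distance r."""
    H = ising_hamiltonian(alpha)
    psi = np.zeros(2**N, dtype=complex)
    psi[0] = 1.0                                   # |up, up, ..., up> in the sz basis
    sz_ops = [op_on_site(sz, i) for i in range(N)]
    zz_ops = {r: sz_ops[0] @ sz_ops[r] for r in range(1, N)}
    times = np.linspace(0, tmax, steps + 1)
    U = expm(-1j * H * (times[1] - times[0]))      # single time-step propagator
    arrival = {r: None for r in range(1, N)}
    for t in times[1:]:
        psi = U @ psi
        z_exp = [np.real(psi.conj() @ (op @ psi)) for op in sz_ops]
        for r in range(1, N):
            zz = np.real(psi.conj() @ (zz_ops[r] @ psi))
            if arrival[r] is None and abs(zz - z_exp[0] * z_exp[r]) > threshold:
                arrival[r] = t
    return arrival

for alpha in (3.0, 0.8):
    arrival = correlation_arrival_times(alpha)
    report = ", ".join(f"r={r}: {t:.2f}" if t is not None else f"r={r}: >2.0"
                       for r, t in arrival.items())
    print(f"alpha = {alpha}: correlation arrival times  {report}")
</code>
```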
Verifying and forming a complete understanding of quantum information propagation is certainly not the end of the story; this also has many profound implications for our understanding of quantum systems more generally. For example, the growth of entanglement, which is a form of information that must obey the bounds described above, is intimately related to the difficulty of modeling quantum systems on a computer. Dr. Michael Foss-Feig explains, "From a theorist's perspective, the experiments are cool because if you want to do something with a quantum simulator that actually pushes beyond what calculations can tell you, doing dynamics with long-range interacting systems is expected to be a pretty good way to do that. In this case, entanglement can grow to a point that our methods for calculating things about a many-body system break down."
Theorist Dr. Zhexuan Gong states that in the context of both works, "We are trying to put bounds on how fast correlation and entanglement can form in a generic many-body system. These bounds are very useful because with long-range interactions, our mathematical tools and state-of-the-art computers can hardly succeed at predicting the properties of the system. We would then need to either use these theoretical bounds or a laboratory quantum simulator to tell us what interesting properties a large and complicated network of spins possess. These bounds will also serve as a guideline on what interaction pattern one should achieve experimentally to greatly speed up information propagation and entanglement generation, both key for building a fast quantum computer or a fast quantum network."
From the experimental side, Dr. Phil Richerme gives his perspective, "We are trying to build the world's best experimental platform for evolving the Schrodinger equation [math that describes how properties of a quantum system change in time]. We have this ability to set up the system in a known state and turn the crank and let it evolve and then make measurements at the end. For system sizes not much larger than what we have here, doing this becomes impossible for a conventional computer."

Story Source:
The above story is based on materials provided by Joint Quantum Institute, University of Maryland. The original article was written by E. Edwards, JQI. Note: Materials may be edited for content and length.

Journal Reference:
  1. Philip Richerme, Zhe-Xuan Gong, Aaron Lee, Crystal Senko, Jacob Smith, Michael Foss-Feig, Spyridon Michalakis, Alexey V. Gorshkov, Christopher Monroe. Non-local propagation of correlations in quantum systems with long-range interactions. Nature, 2014; 511 (7508): 198. DOI: 10.1038/nature13450


Own your own data


A new system would allow individuals to pick and choose what data to share with websites and mobile apps.


Cellphone metadata has been in the news quite a bit lately, but the National Security Agency isn’t the only organization that collects information about people’s online behavior. Newly downloaded cellphone apps routinely ask to access your location information, your address book, or other apps, and of course, websites like Amazon or Netflix track your browsing history in the interest of making personalized recommendations.
At the same time, a host of recent studies have demonstrated that it’s shockingly easy to identify unnamed individuals in supposedly “anonymized” data sets, even ones containing millions of records. So, if we want the benefits of data mining — like personalized recommendations or localized services — how can we protect our privacy?
In the latest issue of PLOS One, MIT researchers offer one possible answer. Their prototype system, openPDS — short for personal data store — stores data from your digital devices in a single location that you specify: It could be an encrypted server in the cloud, but it could also be a computer in a locked box under your desk. Any cellphone app, online service, or big-data research team that wants to use your data has to query your data store, which returns only as much information as is required.
Sharing code, not data
“The example I like to use is personalized music,” says Yves-Alexandre de Montjoye, a graduate student in media arts and sciences and first author on the new paper. “Pandora, for example, comes down to this thing that they call the music genome, which contains a summary of your musical tastes. To recommend a song, all you need is the last 10 songs you listened to — just to make sure you don’t keep recommending the same one again — and this music genome. You don’t need the list of all the songs you’ve been listening to.”
With openPDS, de Montjoye says, “You share code; you don’t share data. Instead of you sending data to Pandora, for Pandora to define what your musical preferences are, it’s Pandora sending a piece of code to you for you to define your musical preferences and send it back to them.”
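The "share code, not data" idea can be sketched as a tiny query interface: raw records stay in the user's store, third parties submit a function, and only the computed summary leaves. This is a hypothetical illustration of the architecture described here, not the actual openPDS API.

```python
from typing import Any, Callable

class PersonalDataStore:
    """Toy model of a user-controlled data store: raw records never leave;
    third parties submit code (a query function) and get back only its summary."""

    def __init__(self):
        self._records = {"listening_history": [], "locations": []}
        self._approved = set()     # query names the user has explicitly allowed

    def add_record(self, kind: str, record: Any) -> None:
        self._records[kind].append(record)

    def approve(self, query_name: str) -> None:
        self._approved.add(query_name)

    def run_query(self, query_name: str, kind: str,
                  query_fn: Callable[[list], Any]) -> Any:
        if query_name not in self._approved:
            raise PermissionError(f"user has not approved query '{query_name}'")
        # Only the aggregate result computed by query_fn is returned, never raw rows.
        return query_fn(self._records[kind])

# --- hypothetical usage by a music service --------------------------------
pds = PersonalDataStore()
for song, genre in [("song A", "jazz"), ("song B", "jazz"), ("song C", "rock")]:
    pds.add_record("listening_history", {"song": song, "genre": genre})

def taste_summary(history: list) -> dict:
    """The service's code, executed inside the store: returns genre counts only."""
    counts: dict = {}
    for row in history:
        counts[row["genre"]] = counts.get(row["genre"], 0) + 1
    return counts

pds.approve("taste_summary")
print(pds.run_query("taste_summary", "listening_history", taste_summary))
# -> {'jazz': 2, 'rock': 1}; the full listening history never leaves the store.
```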
De Montjoye is joined on the paper by his thesis advisor, Alex “Sandy” Pentland, the Toshiba Professor of Media Arts and Sciences; Erez Shmueli, a postdoc in Pentland’s group; and Samuel Wang, a software engineer at Foursquare who was a graduate student in the Department of Electrical Engineering and Computer Science when the research was done.
After an initial deployment involving 21 people who used openPDS to regulate access to their medical records, the researchers are now testing the system with several telecommunications companies in Italy and Denmark. Although openPDS can, in principle, run on any machine of the user’s choosing, in the trials, data is being stored in the cloud.
Meaningful permissions
One of the benefits of openPDS, de Montjoye says, is that it requires applications to specify what information they need and how it will be used. Today, he says, “when you install an application, it tells you ‘this application has access to your fine-grained GPS location,’ or it ‘has access to your SD card.’ You as a user have absolutely no way of knowing what that means. The permissions don’t tell you anything.”
In fact, applications frequently collect much more data than they really need. Service providers and application developers don’t always know in advance what data will prove most useful, so they store as much as they can against the possibility that they may want it later. It could turn out, for instance, that for some music listeners, album cover art is a better predictor of what songs they’ll like than anything captured by Pandora’s music genome.
OpenPDS preserves all that potentially useful data, but in a repository controlled by the end user, not the application developer or service provider. A developer who discovers that a previously unused bit of information is useful must request access to it from the user. If the request seems unnecessarily invasive, the user can simply deny it.
Of course, a nefarious developer could try to game the system, constructing requests that elicit more information than the user intends to disclose. A navigation application might, for instance, be authorized to identify the subway stop or parking garage nearest the user. But it shouldn’t need both pieces of information at once, and by requesting them, it could infer more detailed location information than the user wishes to reveal.
Creating safeguards against such information leaks will have to be done on a case-by-case, application-by-application basis, de Montjoye acknowledges, and at least initially, the full implications of some query combinations may not be obvious. But “even if it’s not 100 percent safe, it’s still a huge improvement over the current state,” he says. “If we manage to get people to have access to most of their data, and if we can get the overall state of the art to move from anonymization to interactive systems, that would be such a huge win.”
“OpenPDS is one of the key enabling technologies for the digital society, because it allows users to control their data and at the same time open up its potential both at the economic level and at the level of society,” says Dirk Helbing, a professor of sociology at ETH Zurich. “I don’t see another way of making big data compatible with constitutional rights and human rights.”


Drone lighting

Autonomous vehicles could automatically assume the right positions for photographic lighting.


Lighting is crucial to the art of photography. But lights are cumbersome and time-consuming to set up, and outside the studio, it can be prohibitively difficult to position them where, ideally, they ought to go.


Researchers at MIT and Cornell University hope to change that by providing photographers with squadrons of small, light-equipped autonomous robots that automatically assume the positions necessary to produce lighting effects specified through a simple, intuitive, camera-mounted interface.
At the International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging in August, they take the first step toward realizing this vision, presenting a prototype system that uses an autonomous helicopter to produce a difficult effect called “rim lighting,” in which only the edge of the photographer’s subject is strongly lit.
According to Manohar Srikanth, who worked on the system as a graduate student and postdoc at MIT and is now a senior researcher at Nokia, he and his coauthors — MIT professor of computer science and engineering Frédo Durand and Cornell’s Kavita Bala, who also did her PhD at MIT — chose rim lighting for their initial experiments precisely because it’s a difficult effect.
“It’s very sensitive to the position of the light,” Srikanth says. “If you move the light, say, by a foot, your appearance changes dramatically.”
Intuitive control
With the new system, the photographer indicates the direction from which the rim light should come, and the miniature helicopter flies to that side of the subject. The photographer then specifies the width of the rim as a percentage of its initial value, repeating that process until the desired effect is achieved.
Thereafter, the robot automatically maintains the specified rim width. “If somebody is facing you, the rim you would see is on the edge of the shoulder, but if the subject turns sideways, so that he’s looking 90 degrees away from you, then he’s exposing his chest to the light, which means that you’ll see a much thicker rim light,” Srikanth says. “So in order to compensate for the change in the body, the light has to change its position quite dramatically.”
In the same way, Srikanth says, the system can compensate for the photographer’s movements. In both cases, the camera itself supplies the control signal. Roughly 20 times a second, the camera produces an image that is not stored on its own memory card but transmitted to a computer running the researchers’ control algorithm. The algorithm evaluates the rim width and adjusts the robot’s position accordingly.
“The challenge was the manipulation of the very difficult dynamics of the UAV [unmanned aerial vehicle] and the feedback from the lighting estimation,” Durand says. “That’s where we put a lot of our efforts, to make sure that the control of the drone could work at the very high speed that’s needed just to keep the thing flying and deal with the information from the lidar [the UAV’s laser rangefinder] and the rim-lighting estimation.”
Quick study
As Srikanth explains, that required some algorithmic streamlining. “When we first started looking at it, we thought we’d come up with a very fancy algorithm that looks at the whole silhouette of the subject and tries to figure out the morphological properties, the curve of the edge, and so on and so forth, but it turns out that those calculations are really time-consuming,” Srikanth says.
Instead, the algorithm simply looks for the most dramatic gradations in light intensity across the whole image and measures their width. With a rim-lit subject, most of those measurements will congregate around the same value, which the algorithm takes to be the width of the rim.
In experiments, this quick approximation was able to keep up with the motions of both the subject and the photographer while maintaining a consistent rim width.
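A rough re-implementation of that idea, gradient detection plus a histogram of edge spacings, fits in a few lines. The sketch below is a guess at the flavor of the algorithm based on the description above (all thresholds and the proportional correction gain are invented); it is not the authors' code.

```python
import numpy as np

def estimate_rim_width(image: np.ndarray, grad_thresh: float = 0.2) -> float:
    """Estimate rim-light width (in pixels) from a grayscale image in [0, 1].

    For each row, find pixels with a steep intensity gradient, measure the
    spacing between consecutive steep-gradient pixels, and take the most
    common spacing across the image as the rim width.
    """
    widths = []
    for row in image:
        grad = np.abs(np.diff(row))
        edges = np.flatnonzero(grad > grad_thresh)   # columns with sharp transitions
        if len(edges) >= 2:
            widths.extend(np.diff(edges))            # spacing between transitions
    if not widths:
        return 0.0
    counts = np.bincount(np.asarray(widths))
    return float(counts.argmax())                    # modal spacing ~ rim width

# Hypothetical feedback step: nudge the light-carrying UAV's angular position
# so the measured width approaches the photographer's target width.
def control_step(current_width: float, target_width: float, gain: float = 0.05) -> float:
    """Return an angular correction (radians) for the light-carrying UAV."""
    return gain * (target_width - current_width)

# Toy usage with a synthetic image: a dark frame with a 4-pixel bright band as the "rim".
img = np.zeros((60, 80))
img[:, 38:42] = 1.0
w = estimate_rim_width(img)
print(f"estimated rim width: {w:.0f} px, correction: {control_step(w, 6.0):+.2f} rad")
```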
The researchers tested their prototype in a motion-capture studio, which uses a bank of high-speed cameras to measure the position of specially designed light-reflecting tags with millimeter accuracy; several such tags were affixed to the helicopter.
But, Srikanth explains, the purpose of the tests was to evaluate the control algorithm, which performed well. Algorithms that gauge robots’ location based only on measurements from onboard sensors are a major area of research in robotics, and the new system could work with any of them. Even rim lighting, Srikanth says, doesn’t require the millimeter accuracy of the motion-capture studio. “We only need a resolution of 2 or 3 centimeters,” he says.
“Rim lighting is a particularly interesting effect, because you want to precisely position the lighting to bring out silhouettes,” says Ravi Ramamoorthi, a professor of computer science and engineering at the University of California, San Diego. “Other effects are in some sense easier — one doesn't need as precise positioning for frontal lighting. So the technique would probably generalize to other light effects. But at the same time, as-precise control and manipulation may not be needed. Manual static positioning might be adequate.”
“Clearly, taking the UAV system out of the lab and into the real world, and making it robust enough to be practical is a challenge,” Ramamoorthi adds, “but also something that should be doable given the rapid advancement of all of these technologies.”