
Archive for the ‘Science’ Category

Happy Birthday Ben

January 17, 2015

Ben Franklin’s birthday is January 17. Benjamin Franklin (1706-1790) was a signer of both the American Declaration of Independence and the United States Constitution, the first American Ambassador to France, and a scientist, inventor, writer, and printer.

His Poor Richard’s Almanac was a best seller during its publication (1732 – 1758) and stands to this day as an icon of American literary invention. He used the Almanac as a vehicle for many well-known sayings. Among the most famous:

Well done is better than well said.

Restoring Order in the Brain

March 13, 2014

(Reprint from Neuroscience News)

Alzheimer’s disease is the most widespread degenerative neurological disorder in the world. Over five million Americans live with it, and one in three senior citizens will die with the disease or a similar form of dementia. While memory loss is a common symptom of Alzheimer’s, other behavioral manifestations — depression, loss of inhibition, delusions, agitation, anxiety, and aggression — can be even more challenging for victims and their families to live with.

Now Prof. Daniel Offen and Dr. Adi Shruster of Tel Aviv University’s Sackler School of Medicine have discovered that by reestablishing a population of new cells in the part of the brain associated with behavior, some symptoms of Alzheimer’s disease significantly decreased or were reversed altogether.

The research, published in the journal Behavioural Brain Research, was conducted on mouse models; it provides a promising target for Alzheimer’s symptoms in human beings as well.

“Until 15 years ago, the common belief was that you were born with a finite number of neurons. You would lose them as you aged or as the result of injury or disease,” said Prof. Offen, who also serves as Chief Scientific Officer at BrainStorm, a biotech company at the forefront of innovative stem cell research. “We now know that stem cells can be used to regenerate areas of the brain.”

(Image: Reestablishing a population of new cells in the part of the brain associated with behavior reduced or reversed some symptoms of Alzheimer’s disease. Credit: ADEAR/NIA)

Speeding up recovery

After introducing stem cells into brain tissue in the laboratory and seeing promising results, Prof. Offen extended the study to mice with Alzheimer’s disease-like symptoms. The gene Wnt3a was introduced into the part of the mouse brain that controls behavior, specifically fear and anxiety, in the hope that it would activate the genes that produce new brain cells.

According to Prof. Offen, untreated Alzheimer’s mice would run heedlessly into an unfamiliar and dangerous area of their habitats instead of assessing potential threats, as healthy mice do. Once treated with the gene that increased the population of new neurons, however, the mice reverted to assessing their new surroundings first.

“Normal mice will recognize the danger and avoid it. Mice with the disease, just like human patients, lose their sense of space and reality,” said Prof. Offen. “We first succeeded in showing that new neuronal cells were produced in the areas injected with the gene. Then we succeeded in showing diminished symptoms as a result of this neuron repopulation.”

“The loss of inhibition is a cause of great embarrassment for most patients and relatives of patients with Alzheimer’s,” said Prof. Offen. “Often, patients take off their pants in public, having no sense of their surroundings. We saw parallel behavior in animal models with Alzheimer’s.”

Next: Memory

After concluding that increased stem cell production in a certain area of the brain had a positive effect on the behavioral deficits of Alzheimer’s, Prof. Offen has moved on to the area of the brain that controls memory. He and his team are currently exploring it in the laboratory and are confident that the results of the new study will be similar.

“Although there are many questions to answer before this research produces practical therapies, we are very optimistic about the results and feel this is a promising direction for Alzheimer’s research,” said Prof. Offen.

Need a New Heart? We Can Print It!

February 12, 2014

New Advance in 3-D Printing and Tissue Engineering Technology

 Date: February 10, 2014

Source: Brigham and Women’s Hospital

Summary: Researchers have introduced a unique micro-robotic technique to assemble the components of complex materials, the foundation of tissue engineering and 3-D printing.

Described in the Jan. 28, 2014, issue of Nature Communications, the research was conducted by Savas Tasoglu, PhD, MS, research fellow in the BWH Division of Renal Medicine, and Utkan Demirci, PhD, MS, associate professor of Medicine in the Division of Biomedical Engineering, part of the BWH Department of Medicine, in collaboration with Eric Diller, PhD, MS, and Metin Sitti, PhD, MS, professor in the Department of Mechanical Engineering, Carnegie Mellon University.

Tissue engineering and 3D printing have become vitally important to the future of medicine for many reasons. The shortage of available organs for transplantation, for example, leaves many patients on lengthy waiting lists for life-saving treatment. Being able to engineer organs using a patient’s own cells can not only alleviate this shortage, but also address issues related to rejection of donated organs. Developing therapies and testing drugs using current preclinical models have limitations in reliability and predictability. Tissue engineering provides a more practical means for researchers to study cell behavior, such as cancer cell resistance to therapy, and test new drugs or combinations of drugs to treat many diseases.

The presented approach uses untethered magnetic micro-robotic coding for precise construction of individual cell-encapsulating hydrogels (such as cell blocks). The micro-robot, which is remotely controlled by magnetic fields, can move one hydrogel at a time to build structures. This is critical in tissue engineering, as human tissue architecture is complex, with different types of cells at various levels and locations. When building these structures, the location of the cells is significant in that it will impact how the structure will ultimately function. “Compared with earlier techniques, this technology enables true control over bottom-up tissue engineering,” explains Tasoglu.

Tasoglu and Demirci also demonstrated that micro-robotic construction of cell-encapsulating hydrogels can be performed without affecting cell vitality and proliferation. Further benefits may be realized by using numerous micro-robots together in bioprinting, the creation of a design that can be utilized by a bioprinter to generate tissue and other complex materials in the laboratory environment.

“Our work will revolutionize three-dimensional precise assembly of complex and heterogeneous tissue engineering building blocks and serve to improve complexity and understanding of tissue engineering systems,” said Metin Sitti, professor of Mechanical Engineering and the Robotics Institute and head of CMU’s NanoRobotics Lab.

“We are really just beginning to explore the many possibilities in using this micro-robotic technique to manipulate individual cells or cell-encapsulating building blocks,” says Demirci. “This is a very exciting and rapidly evolving field that holds a lot of promise in medicine.”

___________________________

Brigham and Women’s Hospital. “New advance in 3-D printing and tissue engineering technology.” ScienceDaily. ScienceDaily, 10 February 2014.

NanoTechnology, NanoBots, and Computer Manufacturing

January 24, 2012


In the early 1990s, BASF televised a series of commercials touting its ability to make products stronger, brighter, better. [BASF Commercial on YouTube] BASF claimed this was made possible by BASF chemical engineering. It turns out this was only partly accurate. In fact, BASF accomplished these feats through ahead-of-its-time product engineering incorporating early implementations of consumer-product nanotechnology.

Nanotechnology provides functional capability at the scale of atoms and contributes intrinsically to the strength, durability, and functional characteristics of molecular structures.

To understand the world of nanotechnology, one has to understand the units of measurement involved. One centimeter is one-hundredth of a meter, a millimeter is one-thousandth of a meter, and a micrometer is one-millionth of a meter. As small as some of these measurements may seem, they are huge when compared to the nanoscale. A nanometer (nm) is one-billionth of a meter, which is even smaller than the wavelength of visible light and about a hundred-thousandth the width of a human hair.
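To make these scales concrete, here is a minimal Python sketch (mine, not part of the original post; the hair width and light wavelength are assumed typical figures) that converts each length into nanometers:

```python
# Rough scale comparison, expressed in nanometers (1 m = 1e9 nm).
METERS_TO_NM = 1e9

lengths_in_meters = {
    "centimeter": 1e-2,                         # one-hundredth of a meter
    "millimeter": 1e-3,                         # one-thousandth of a meter
    "micrometer": 1e-6,                         # one-millionth of a meter
    "nanometer": 1e-9,                          # one-billionth of a meter
    "human hair width (assumed ~0.1 mm)": 1e-4,
    "green light wavelength (assumed ~550 nm)": 550e-9,
}

for name, meters in lengths_in_meters.items():
    print(f"{name}: {meters * METERS_TO_NM:,.0f} nm")
```

Running it shows a human hair at about 100,000 nm, which is where the “hundred-thousandth the width of a human hair” comparison comes from.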

Wikipedia says this regarding nanotechnology: “Nanotechnology (sometimes shortened to “nanotech”) is the study of manipulating matter on an atomic and molecular scale. Generally, nanotechnology deals with developing materials, devices, or other structures possessing at least one dimension sized from 1 to 100 nanometers. Quantum mechanical effects are important at this quantum-realm scale.

Nanotechnology is very diverse, ranging from extensions of conventional device physics to completely new approaches based upon molecular self-assembly, from developing new materials with dimensions on the nanoscale to investigating whether we can directly control matter on the atomic scale. Nanotechnology entails the application of fields of science as diverse as surface science, organic chemistry, molecular biology, semiconductor physics, microfabrication, etc.

There is much debate on the future implications of nanotechnology. Nanotechnology may be able to create many new materials and devices with a vast range of applications, such as in medicine, electronics, biomaterials and energy production. On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials, and their potential effects on global economics, as well as speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.”

It appears the field of nanotechnology promises to deliver as much for future product development excitement as it does for concerns about its possibilities and uses.

Graphene

Graphene is an allotrope of carbon whose structure consists of one-atom-thick planar sheets of bonded carbon atoms densely packed in a honeycomb crystal lattice. Graphene is most easily visualized as atomic-scale chicken wire made of carbon atoms.

Graphene nanotechnology can replace the silicon transistors now in your computer with transistors based on carbon nanotubes. It has been discovered that carbon nanotubes can be used to produce smaller and faster components: if the silicon in the transistor channel is exchanged for a carbon nanotube, the transistor can be made smaller and faster. By its very nature, graphene provides an ideal foundational structure for the construction of these nanotubes.

Most recently, nano-physicists in Copenhagen, Denmark, have made a discovery that could change the way data is stored on computers. Using graphene slices as “nanotubes,” they have found that when a nanotube is placed between magnetic electrodes, the direction of a single electron spin caught on the nanotube can be controlled by an electric potential. Called “spintronics,” this development has already been hailed as the breakthrough needed to redefine the manner in which information is stored, manipulated, and retrieved in future computing devices.

This discovery will make it possible to combine electricity and magnetism in a new transistor concept. In their experiments, the nano-physicists use carbon nanotubes as transistors. This new nanoscale structure will speed up computers exponentially.

Perhaps the most thrilling future possibility of nanotechnology is the creation of the nanobot, a still-hypothetical molecular robot. These nanobots have several key properties. First, they can reproduce themselves: if they can reproduce once, then they can, in principle, create an unlimited number of copies of themselves; all it takes is creating the first. Second, they are capable of identifying molecules and cutting them up at precise points. Third, by following a master code, they are capable of reassembling these atoms into different arrangements. Once constructed, nanobots will provide a means for true automation of manufacturing processes. What will begin with the fabrication and manipulation of molecules will evolve into the replication of larger and larger organic and non-organic systems. Ultimately, nanobots will become the basis of most, if not all, product manufacturing, including the manufacture of computing devices.

Nanobots do not exist today, but once the first nanobot is successfully produced, it will most certainly and fundamentally alter society as we know it.

Whether the impact to society is minimal or substantial, there is no question that nanobot technology will completely transform all types of manufacturing including the manner in which computing devices are designed and built.

Moore’s Law

Moore’s law describes a long-term trend in the history of computing hardware. Simply stated, Moore’s law asserts that the number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years. The capabilities of many digital electronic devices are strongly linked to Moore’s law: processing speed, memory capacity, sensors, and even the number and size of pixels in digital cameras. A notable effect of the law is that all of these are improving at (roughly) exponential rates as well.

This exponential improvement has dramatically enhanced the impact of digital electronics in nearly every segment of the world economy. Accordingly, Moore’s law describes a driving force of technological and social change at play in the global economy in the late 20th and early 21st centuries. Though there are those who see the effects of Moore’s law as having negative consequences for various segments of the world’s environment, the positive effects of the technological, medical, and social engineering breakthroughs resulting from these advancements are legendary.

But we are closing in on the end of this technological watershed. The engineering sciences are in complete agreement: by the year 2020 (some predict as early as 2015), digital manufacturing will reach the point at which further miniaturization of transistors on integrated circuits becomes impossible. Capacity will be reached and exceeded.
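As a back-of-the-envelope illustration (my sketch, not part of the original post; the 1971 Intel 4004 baseline of roughly 2,300 transistors is an assumed reference point), the doubling rule can be written as N(t) = N0 · 2^((t - t0)/2):

```python
# Back-of-the-envelope Moore's law: transistor counts double roughly every two years.
def transistor_estimate(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Estimate transistors per chip from an assumed 1971 baseline (~2,300, the Intel 4004)."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011, 2020):
    print(f"{year}: ~{transistor_estimate(year):,.0f} transistors")
```

On this crude model the count crosses a billion around 2010, roughly in line with the chips of that era; the point above is that the curve cannot keep doubling once feature sizes approach the atomic scale.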

Nanotechnology to the rescue! As renowned physicist Richard Feynman suggested in a speech he gave way back in 1959, “there’s plenty of room at the bottom”. Nanotechnology engineering experiments occurring today will take miniaturization to a scale thousands of times smaller than what is currently possible.

As the invention of the transistor ushered in our current digital age, nanotechnology will bring about dramatic and far-reaching changes – changes to our technology, our products, and to the way we live.

Truth or Dare

January 10, 2012

Stephen Hawking at 70

January 7, 2012

What does he think about all day? (Image: Science Museum/Sarah Lee)

When he was diagnosed with motor neurone disease aged just 21, Stephen Hawking was expected to live only a few years. He will be 70 this month, and in an exclusive interview with New Scientist he looks back on his life and work.

 Read more: Hawking highlights

STEPHEN HAWKING is one of the world’s greatest physicists, famous for his work on black holes. His condition means that he can now only communicate by twitching his cheek (see “The man who saves Stephen Hawking’s voice”). His responses to the questions are followed by our own (New Scientist, “NS”) elaboration of the concepts he describes.

 What has been the most exciting development in physics during the course of your career?

COBE’s discovery of tiny variations in the temperature of the cosmic microwave background and the subsequent confirmation by WMAP that these are in excellent agreement with the predictions of inflation. The Planck satellite may detect the imprint of the gravitational waves predicted by inflation. This would be quantum gravity written across the sky.

New Scientist writes: The COBE and WMAP satellites measured the cosmic microwave background (CMB), the afterglow of the big bang that pervades all of space. Its temperature is almost completely uniform – a big boost to the theory of inflation, which predicts that the universe underwent a period of breakneck expansion shortly after the big bang that would have evened out its wrinkles.

If inflation did happen, it should have sent ripples through space-time – gravitational waves – that would cause variations in the CMB too subtle to have been spotted so far. The Planck satellite, the European Space Agency’s mission to study the CMB even more precisely, could well see them.

Einstein referred to the cosmological constant as his “biggest blunder”. What was yours?

I used to think that information was destroyed in black holes. But the AdS/CFT correspondence led me to change my mind. This was my biggest blunder, or at least my biggest blunder in science.

NS: Black holes consume everything, including information that strays too close. But in 1975, building on work by the Israeli physicist Jacob Bekenstein, Hawking showed that black holes slowly emit radiation, causing them to evaporate and eventually disappear. So what happens to the information they swallow? Hawking argued for decades that it was destroyed – a major challenge to ideas of continuity, and cause and effect. In 1997, however, theorist Juan Maldacena developed a mathematical shortcut, the “Anti-de-Sitter/conformal field theory correspondence”, or AdS/CFT. This links events within a contorted space-time geometry, such as in a black hole, with simpler physics at that space’s boundary.

In 2004, Hawking used this to show how a black hole’s information leaks back into our universe through quantum-mechanical perturbations at its boundary, or event horizon. The recantation cost Hawking a bet made with fellow theorist John Preskill a decade earlier.
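For reference (not quoted in the interview, but the standard result from Hawking’s mid-1970s papers), the temperature of this radiation falls off with the black hole’s mass M, which is why large black holes evaporate extremely slowly:

$$ T_{\mathrm{H}} = \frac{\hbar c^{3}}{8 \pi G M k_{\mathrm{B}}} $$

Here $\hbar$ is the reduced Planck constant, $c$ the speed of light, $G$ Newton’s gravitational constant, and $k_{\mathrm{B}}$ Boltzmann’s constant.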

What discovery would do most to revolutionize our understanding of the universe?

The discovery of supersymmetric partners for the known fundamental particles, perhaps at the Large Hadron Collider. This would be strong evidence in favour of M-theory.

NS: The search for supersymmetric particles is a major goal of the LHC at CERN. The standard model of particle physics would be completed by finding the Higgs boson, but has a number of problems that would be solved if all known elementary particles had a heavier “superpartner”. Evidence of supersymmetry would support M-theory, the 11-dimensional version of string theory that is the best stab so far at a “theory of everything“, uniting gravity with the other forces of nature.

If you were a young physicist just starting out today, what would you study?

I would have a new idea that would open up a new field.

What do you think most about during the day?

Women. They are a complete mystery.

_____________________________________________________________

To mark Hawking’s birthday, the Centre for Theoretical Cosmology, University of Cambridge, is hosting a symposium entitled “The State of the Universe” on 8 January (watch live at ctc.cam.ac.uk/hawking70/multimedia.html). An exhibition of his life and work opens at the Science Museum, London, on 20 January

Variable Dark Energy Could Explain Old Galaxy Clusters

January 6, 2012
[Re-Print Alert: Original Here]

by Ken Croswell and Maggie McKee 

Does dark energy change over time? An alternative model of the as yet undetected entity that is thought to be accelerating the universe’s expansion could explain some puzzling observations of galaxy clusters. But it will have to jump many more hurdles to compete with the simplest and so far most successful model of the elusive entity.

That model, called the cosmological constant, holds that there is a certain amount of repulsive energy in every cubic centimeter of space, and that amount stays the same over time. As the universe expands, more space exists, and so the expansion accelerates.

Now Edoardo Carlesi of the Autonomous University in Madrid, Spain, and his colleagues have simulated a universe where the amount of repulsive energy per unit of volume changes with time.
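In standard cosmological notation (not spelled out in the article), the contrast is between a fixed and a time-varying equation-of-state parameter w; the second line is one commonly used time-varying form, which may or may not match the specific model Carlesi’s team studied:

$$ \rho_{\Lambda} = \text{constant}, \qquad w \equiv \frac{p}{\rho c^{2}} = -1 \quad \text{(cosmological constant)} $$

$$ w(a) = w_{0} + w_{a}\,(1 - a) \quad \text{(one common time-varying parametrization, with } a \text{ the cosmic scale factor)} $$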

They say the model can explain how several galaxy clusters grew to weigh as much as a quadrillion (10^15) suns by the time the universe was just 6 billion years old. That’s a puzzle because some researchers say 6 billion years would not have been enough time for gravity to amass such large structures.

 

Standard recipe

The puzzle arises if the standard “recipe” for the universe is used. The ingredients for that recipe are a large amount of dark energy, in the form of a cosmological constant, and a dollop of matter. Their ratio has been calculated by studying the cosmic microwave background, radiation that reveals the distribution of matter and energy in the early universe.

Looking at the cosmic microwave background data through the lens of a different dark energy model can produce different ratios of ingredients. The cosmological constant model allows for matter to make up 27 per cent of the universe’s energy density, whereas the dark energy model studied by Carlesi’s team provides a more generous helping: 39 per cent.
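For context (an inference of mine, assuming a spatially flat universe in which matter and dark energy together account for essentially all of the energy density), those matter fractions imply correspondingly different dark energy shares:

$$ \Omega_{m} + \Omega_{\mathrm{DE}} \approx 1 \;\Rightarrow\; \Omega_{\mathrm{DE}} \approx 0.73 \ \text{(cosmological constant)} \quad \text{versus} \quad \Omega_{\mathrm{DE}} \approx 0.61 \ \text{(time-varying model)} $$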

Massive clusters can form up to 10 times as often using this recipe, the researchers say. “You can explain current observations within a model that allows much more matter,” says Carlesi. With more matter, galaxies attract other galaxies more strongly through their gravitational pull, so massive clusters form faster.

First hurdle

The cluster problem may not even be a problem, though, says Dragan Huterer at the University of Michigan in Ann Arbor. He says the jury is still out on whether the clusters challenge the leading cosmological model, because there is a lot of uncertainty about their mass, most of which is thought to be tied up in invisible dark matter.

The cosmological constant has so far been able to explain a wide range of observations, so turning to a relatively unproven model to account for a few galaxy clusters that may be heavier than expected “is like using a huge hammer to kill a tiny fly”, he says.

Carlesi says this is just the first test of the model, and Cristian Armendáriz-Picón at Syracuse University in New York agrees. He says the model Carlesi is using should undergo further tests that the cosmological constant has already passed. For example, its effects should be consistent with the integrated Sachs-Wolfe effect, in which photons from the cosmic microwave background experience slight changes in wavelength as they feel the gravity of superclusters of galaxies they pass through.