Saturday, 11 April 2015

Zooming in on panoramas with your tablet

Most people are familiar with the fictional world of "Star Trek," in which the characters can use a holodeck to create and interact with virtual worlds. A similar, if milder, effect can be recreated in the real world using 360-degree panoramic images. Researchers are now bringing them to our tablets, complete with individual camera work and editing.

Panoramic video thrusts viewers right into the middle of the action. "Even a 180-degree panorama leaves you really feeling part of the action," says an enthusiastic Christian Weissig from the Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut HHI in Berlin. The technology has existed for a number of years, but when Germany's first IMAX cinema in Munich closed in 2010 and 3D took over the job of drawing viewers into the action, it seemed as if panoramic technology had already passed its peak. "Too expensive, not commercially viable" -- that was the brutal verdict on the technology that inspired the "Star Trek" holodeck. Thanks to the research done by Weissig and his team, the panorama could soon pop up where we might least expect it: on the screens of smart TVs, smartphones and tablets. "Ultra-HD-Zoom" is a prototype that allows users to select and navigate around high-resolution segments of panoramic images. The researchers will be showcasing their tablet app at the CeBIT computer expo in Hanover, March 16-20.
Video panoramas are created by combining the images recorded by a series of high-resolution cameras. Fraunhofer HHI's OmniCam system, for instance, uses 10 HD cameras, making the technology capable of creating 360-degree panoramic images in real time -- a fascinating proposition for covering live events. Last year, for instance, Fraunhofer researchers recorded the soccer World Cup final between Germany and Argentina in Rio de Janeiro. They have also recorded the concert given by the Berlin Philharmonic on the occasion of the 25th anniversary of the fall of the Berlin Wall. The recordings have a resolution of 10,000 x 2,000 pixels. "Sadly, none of us have a panoramic cinema at home, and the devices in our living rooms and in our pockets are simply not capable of processing this amount of data," explains Weissig.
The cameraman-spectator
What is possible, though, using currently available LTE networks, is to transmit individual segments of the panorama. "What we've done is split the panorama into a set number of segments. These segments are made available to each user concurrently, with the app selecting the segments needed to display the desired section of the panorama," says Weissig. This approach makes it technically feasible for a very large group of people to use a panoramic image at the same time. Of course, they won't get the panorama at its full resolution, just the individual segments they choose rendered at the resolution of their device. "It's another step towards personalized television: users taking advantage of the 'second screen' to become their own cameraman and take over the footage, maybe by zooming in to a specific point within their chosen segment. Until now, the apps on the market have been able to offer only a selection of static camera angles, or else transmit a full panorama in HD definition," says Weissig.
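The segment-selection idea can be sketched in a few lines. The panorama and segment dimensions below are invented for illustration; the article does not describe Fraunhofer's actual tiling scheme:

```python
# Illustrative tile selection for a wrap-around panorama.
# PANORAMA_WIDTH and SEGMENT_WIDTH are assumed values, not Fraunhofer's.
PANORAMA_WIDTH = 10_000   # full 360-degree strip, in pixels
SEGMENT_WIDTH = 1_000     # width of each pre-encoded segment

def segments_for_view(view_left, view_width):
    """Return indices of the segments needed to display a viewport.

    The panorama wraps around at 360 degrees, so indices are taken
    modulo the total segment count.
    """
    n_segments = PANORAMA_WIDTH // SEGMENT_WIDTH
    first = view_left // SEGMENT_WIDTH
    last = (view_left + view_width - 1) // SEGMENT_WIDTH
    return [i % n_segments for i in range(first, last + 1)]
```

A viewport straddling the seam, e.g. `segments_for_view(9500, 1200)`, correctly requests the last and first segments, so a client only ever downloads the tiles covering its chosen section.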
Content providers and TV broadcasters also stand to profit, potentially offering the new capabilities as a service of their own. "We are already collaborating with partners who want to implement the technology themselves, for instance to improve the marketing of live concerts," says Weissig. The investment needed to produce panoramic recordings remains high -- but now that cost can be spread across a large number of users via the pricing of the app, which stays affordable for each individual user. The technology should be commercially available within the year. "In the meantime, we'll be polishing the product to improve transfer speeds," says Weissig.
But isn't there still a shortage of content? And aren't panoramic recordings still far too expensive? "The trend is clearly one towards extremely high resolutions -- just look at the new 4K TVs or the Japanese broadcasting corporation NHK's drive towards 8K resolution. Panorama technology is advancing too. In the future, there will be more content and more devices capable of displaying it. The Ultra-HD-Zoom app is a first application that we can expect to be available soon. As such, it's a pointer as to where panorama technology might go in the future," says Weissig.
At their CeBIT trade fair booth, the researchers have set up the whole scenario: visitors can use the app to select a specific camera angle from live footage, with the available feeds displayed as overview images on the right of the screen. If a visitor selects one of the OmniCam cameras, they are free to navigate the content themselves.

Big data allows computer engineers to find genetic clues in humans

Big data: It's a term we read and hear about often, but one that is hard to grasp. Computer scientists at Washington University in St. Louis' School of Engineering & Applied Science tackled some big data about an important protein and discovered its place in human history as well as clues about its role in complex neurological diseases.


Through a novel method of analyzing these big data, Sharlee Climer, PhD, research assistant professor in computer science, and Weixiong Zhang, PhD, professor of computer science and of genetics at the School of Medicine, discovered a region encompassing the gephyrin gene on chromosome 14 that underwent rapid evolution after splitting in two completely opposite directions thousands of years ago. Those opposite directions, known as yin and yang, are still strongly evident across different populations of people around the world today.
The results of their research, done with Alan Templeton, PhD, the Charles Rebstock professor emeritus in the Department of Biology in the College of Arts & Sciences, appear in the March 27 issue of Nature Communications.
The gephyrin protein is a master regulator of receptors in the brain that transmit messages. Malfunction of the protein has been associated with epilepsy, Alzheimer's disease, schizophrenia and other neurological diseases. Additionally, without gephyrin, our bodies are unable to synthesize an essential trace nutrient.
The research team used big data from the International HapMap Project, a public resource of genetic data from populations worldwide designed to help researchers find genes associated with human disease, as well as from the 1000 Genomes Project, another public source of sequenced human genomes. In total, they looked at the genetic data of 3,438 individuals.
When they analyzed the data, they made an interesting discovery in a sequence of markers, called a haplotype, enveloping the gephyrin gene: up to 80 percent of the haplotypes were perfect yin and yang types, or complete opposites of each other. They were able to trace the split back to what is known as the Ancestral haplotype, or that of the most recent common human ancestor.
"We observed that the Ancestral haplotype split into two distinct haplotypes and subsequently underwent rapid evolution, as each haplotype possesses about 140 markers that are different from the Ancestral haplotype," Climer says. "These numerous mutations should have produced a large number of intermediate haplotypes, but the intermediates have almost entirely disappeared, and the divergent yin and yang haplotypes are prevalent in populations representing every major human ancestry."
Using the data from the HapMap Project, they looked at the gephyrin region in several populations of people, including European, East and South Asian, and African heritage, and found variations in the haplotype frequencies of each of these populations. Those from African origin generally have more yang haplotypes, while those of European origin have more yin haplotypes. Those of Asian descent have nearly equal numbers of yin and yang haplotypes.
Humans carry pairs of chromosomes, and 30 percent of Japanese individuals carry two yin haplotypes or two yang haplotypes. Another 30 percent of these individuals possess both a yin and a yang haplotype, reflecting the roughly equal probability of inheriting either one.
To find this pattern within the huge datasets, the research team used a novel method to assess correlations between genetic markers called single nucleotide polymorphisms, or SNPs, which are variations in a DNA sequence that make humans different from each other.
The team's method, called BlocBuster, computes correlations between each pair of SNPs, then builds a network of those correlations. By observing the network, researchers can see clusters of correlated markers.
"For example, you could build a Facebook network using all of your Facebook friends," Climer says. "If two of your friends are friends with each other, you would connect them in the network. If you see that a cluster of people is interconnected with each other, they probably share something in common, such as a family relationship, a school, or some type of social interaction. Similarly, with an efficient algorithm and an adequate number of processors and time, we can look at every pair of SNPs, build these networks and observe clusters of interconnected SNPs."
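The pairwise-correlation network Climer describes can be sketched as follows. This is not BlocBuster itself, which uses its own correlation measure tailored to SNP data; plain Pearson correlation and a union-find clustering stand in for the real components:

```python
import numpy as np
from itertools import combinations

def snp_network(genotypes, threshold=0.8):
    """Build a correlation network over SNPs and return its clusters.

    genotypes: (n_individuals, n_snps) array of minor-allele counts (0/1/2).
    An edge connects every SNP pair whose |correlation| >= threshold;
    clusters are the connected components with more than one member.
    """
    n_snps = genotypes.shape[1]
    corr = np.corrcoef(genotypes, rowvar=False)   # n_snps x n_snps matrix
    edges = [(i, j) for i, j in combinations(range(n_snps), 2)
             if abs(corr[i, j]) >= threshold]
    # Union-find to extract connected components (clusters of linked SNPs)
    parent = list(range(n_snps))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edges:
        parent[find(i)] = find(j)
    clusters = {}
    for s in range(n_snps):
        clusters.setdefault(find(s), []).append(s)
    return edges, [c for c in clusters.values() if len(c) > 1]
```

On HapMap-scale data the all-pairs step dominates the cost, which is why the article stresses "an efficient algorithm and an adequate number of processors."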
"The BlocBuster approach is a paradigm shift from the conventional methods for genome-wide association studies, popularly known as GWAS, in which one or a few markers are examined at a time," Zhang says. "It is truly a data mining technique for big data like those from the HapMap and 1000 Genomes projects."
The researchers can also tailor this approach to examine complex traits and diseases.
"BlocBuster is able to detect combinations of networked genetic markers that are characteristic of complex traits," Zhang says. "It is suitable for analyzing traits, such as body weights, which are determined by multiple genetic factors, and genetic patterns in populations, such as the yin-yang haplotypes we discovered."
Ultimately, they expect this method will shed light on the genetic roots of disease.
"Most complex diseases arise due to a group of genetic variations interacting together," Climer says. "Different groups of people who get a disease may be affected by different groups of variations. There's not enough power to see most of these intricate associations when looking at single markers one at a time. We're taking a combinatorial approach -- looking at combinations of markers together -- and we're able to see the patterns."

Next important step toward quantum computer

Physicists at the Universities of Bonn and Cambridge have succeeded in linking two completely different quantum systems to one another. In doing so, they have taken an important step forward on the way to a quantum computer. To accomplish their feat, the researchers used a method that seems to work as well in the quantum world as it does for people: teamwork. The results have now been published in Physical Review Letters.
When facing big challenges, it is best to work together. In a team, the individual members can contribute their individual strengths -- to the benefit of all involved. One may be an absent-minded scientist who has brilliant ideas but quickly forgets them. He needs the help of a conscientious colleague who writes everything down, to remind the scatterbrain of it later. It's very similar in the world of quanta. There, so-called quantum dots (abbreviated: qDots) play the role of the forgetful genius. Quantum dots are unbeatably fast when it comes to disseminating quantum information. Unfortunately, they forget the result of the calculation just as quickly -- too quickly to be of any real use in a quantum computer.
In contrast, charged atoms, called ions, have an excellent memory: They can store quantum information for many minutes. In the quantum world, that is an eternity. They are less well suited for fast calculations, however, because the internal processes are comparatively slow. The physicists from Bonn and Cambridge have therefore obliged both of these components, qDots and ions, to work together as a team. Experts speak of a hybrid system, because it combines two completely different quantum systems with one another.
Absent-minded qDots
qDots are considered a great hope in the development of quantum computers. In principle, they are extremely miniaturized electron storage units. qDots can be produced using the same techniques as normal computer chips; it is only necessary to miniaturize the structures on the chips until they hold just one single electron (in a conventional PC, it is 10 to 100 electrons).
The electron stored in a qDot can take on states that are predicted by quantum theory. However, they are very short-lived: they decay within a few picoseconds (for illustration: in one picosecond, light travels a distance of just 0.3 millimeters). This decay produces a small flash of light: a photon. Photons are wave packets that vibrate in a specific plane -- the direction of polarization. The state of the qDot determines the direction of polarization of the photon. "We used the photon to excite an ion," explains Prof. Dr. Michael Köhl from the Institute of Physics at the University of Bonn. "Then we stored the direction of polarization of the photon."
Conscientious ions
To do so, the researchers connected a thin glass fiber to the qDot. They transported the photon via the fiber to the ion many meters away. The fiber-optic networks used in telecommunications operate very similarly. To make the transfer of information as efficient as possible, they had trapped the ion between two mirrors. The mirrors bounced the photon back and forth like a ping-pong ball until it was absorbed by the ion. "By shooting it with a laser beam, we were able to read out the ion that was excited in this way," explains Prof. Köhl. "In the process, we were able to measure the direction of polarization of the previously absorbed photon." In a sense, then, the state of the qDot can be preserved in the ion -- theoretically for many minutes.
This success is an important step on the still long and rocky road to a quantum computer. In the long term, researchers around the world are hoping for true marvels from this new type of computer: Certain tasks, such as the factoring of large numbers, should be child's play for such a computer. In contrast, conventional computers find this a really tough nut to crack. However, a quantum computer displays its talents only for such special tasks: For normal types of basic computations, it is pitifully slow.

A robot prepared for self-awareness: Expanded software architecture for walking robot Hector

A year ago, researchers at Bielefeld University showed that their software endowed the walking robot Hector with a simple form of consciousness. Their new research goes a step further: they have now developed a software architecture that could enable Hector to see himself as others see him. "With this, he would have reflexive consciousness," explains Dr. Holk Cruse, professor at the Cluster of Excellence Cognitive Interaction Technology (CITEC) at Bielefeld University. The architecture is based on artificial neural networks. Together with colleague Dr. Malte Schilling, Prof. Dr. Cruse published this new study in the online collection Open MIND, a volume from the Mind-Group, which is a group of philosophers and other scientists studying the mind, consciousness, and cognition.
Both biologists are involved in further developing and enhancing the walking robot Hector's software. The robot is modelled after a stick insect. How Hector walks and deals with obstacles in its path was first demonstrated at the end of 2014. Hector's extended software will next be tested in a computer simulation. "What works in the computer simulation must then, in a second phase, be transferred over to the robot and tested on it," explains Cruse. Drs. Schilling and Cruse are investigating to what extent various higher-level mental states, for example aspects of consciousness, may develop in Hector with this software -- even though these traits were not specifically built into the robot beforehand. The researchers speak of "emergent" abilities, that is, capabilities that suddenly appear or emerge.
Until now, Hector has been a reactive system: it reacts to stimuli in its surroundings. Thanks to the software program "Walknet," Hector can walk with an insect-like gait, and another program called "Navinet" may enable the robot to find a path to a distant target. Both researchers have also developed the software expansion "reaCog," which is activated when the other two programs are unable to solve a given problem. This expanded software enables the robot to simulate "imagined behaviour" that may solve the problem: instead of just automatically completing a pre-determined operation, it first looks for new solutions and evaluates whether each action makes sense. Being able to perform imagined actions is a central characteristic of a simple form of consciousness.
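The division of labour between the reactive layer and reaCog can be caricatured in a few lines. The program names Walknet and reaCog are from the article; everything else (the function shapes, the `simulate` callback standing in for the robot's internal body model) is our invention:

```python
def walknet_step(problem_detected):
    """Reactive layer: routine walking works until a novel problem appears."""
    return None if problem_detected else "step forward"

def reacog_plan(candidate_actions, simulate):
    """Planning layer: try 'imagined' actions internally, without moving,
    and return the first one the internal simulation judges to work."""
    for action in candidate_actions:
        if simulate(action):
            return action
    return None

def controller(problem_detected, candidate_actions, simulate):
    """reaCog is consulted only when the reactive layer is stuck."""
    action = walknet_step(problem_detected)
    if action is None:
        action = reacog_plan(candidate_actions, simulate)
    return action
```

The key point the sketch preserves is that an action is executed for real only after it has already succeeded in imagination.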
In their previous research, both CITEC researchers had already determined that Hector's control system could adopt a number of higher-level mental states. "Intentions, for instance, can be found in the system," explains Malte Schilling. These "inner mental states," such as intentions, make goal-directed behaviour possible, which for example may direct the robot to a certain location (like a charging station). The researchers have also identified how properties of emotions may show up in the system. "Emotions can be read from behaviour. For example, a person who is happy takes more risks and makes decisions faster than someone who is anxious," says Holk Cruse. This behaviour could also be implemented in the control model reaCog: "Depending on its inner mental state, the system may adopt quick, but risky solutions, and at other times, it may take its time to search for a safer solution."
To examine which forms of consciousness are present in Hector, the researchers rely in particular on psychological and neurobiological definitions. As Holk Cruse explains, "A human possesses reflexive consciousness when he not only can perceive what he experiences, but also has the ability to experience that he is experiencing something. Reflexive consciousness thus exists if a human or a technical system can see itself 'from outside of itself,' so to speak."
In their new research, Cruse and Schilling show a way in which reflexive consciousness could emerge. "With the new software, Hector could observe its inner mental state -- to a certain extent, its moods -- and direct its actions using this information," says Malte Schilling. "What makes this unique, however, is that with our software expansion, the basic faculties are prepared so that Hector may also be able to assess the mental state of others. It may be able to sense other people's intentions or expectations and act accordingly." Dr. Cruse explains further, "The robot may then be able to 'think': What does this subject expect from me? And then it can orient its actions accordingly."

Computers that mimic the function of the brain

Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons.
"Computers are very impressive in many ways, but they're not equal to the mind," said Mark Hersam, the Bette and Neison Harris Chair in Teaching Excellence in Northwestern University's McCormick School of Engineering. "Neurons can achieve very complicated computation with very low power consumption compared to a digital computer."
A team of Northwestern researchers, including Hersam, has accomplished a new step forward in electronics that could bring brain-like computing closer to reality. The team's work advances memory resistors, or "memristors," which are resistors in a circuit that "remember" how much current has flowed through them.
The research is described in the April 6 issue of Nature Nanotechnology. Tobin Marks, the Vladimir N. Ipatieff Professor of Catalytic Chemistry, and Lincoln Lauhon, professor of materials science and engineering, are also authors on the paper. Vinod Sangwan, a postdoctoral fellow co-advised by Hersam, Marks, and Lauhon, served as first author. The remaining co-authors--Deep Jariwala, In Soo Kim, and Kan-Sheng Chen--are members of the Hersam, Marks, and/or Lauhon research groups.
"Memristors could be used as a memory element in an integrated circuit or computer," Hersam said. "Unlike other memories that exist today in modern electronics, memristors are stable and remember their state even if you lose power."
Current computers use random access memory (RAM), which moves very quickly as a user works but does not retain unsaved data if power is lost. Flash drives, on the other hand, store information when they are not powered but work much more slowly. Memristors could provide a memory that is the best of both worlds: fast and reliable. But there's a problem: memristors are two-terminal electronic devices, which can control only one voltage channel. Hersam wanted to transform the memristor into a three-terminal device, allowing it to be used in more complex electronic circuits and systems.
Hersam and his team met this challenge by using single-layer molybdenum disulfide (MoS2), an atomically thin, two-dimensional semiconducting nanomaterial. Much like the way fibers are arranged in wood, the atoms in a material are arranged in ordered regions, called "grains," each with its own orientation. The sheet of MoS2 that Hersam used has a well-defined grain boundary, the interface where two different grains come together.
"Because the atoms are not in the same orientation, there are unsatisfied chemical bonds at that interface," Hersam explained. "These grain boundaries influence the flow of current, so they can serve as a means of tuning resistance."
When a large electric field is applied, the grain boundary literally moves, causing a change in resistance. By using MoS2 with this grain boundary defect instead of the typical metal-oxide-metal memristor structure, the team presented a novel three-terminal memristive device that is widely tunable with a gate electrode.
"With a memristor that can be tuned with a third electrode, we have the possibility to realize a function you could not previously achieve," Hersam said. "A three-terminal memristor has been proposed as a means of realizing brain-like computing. We are now actively exploring this possibility in the laboratory."

Improving the quality of medical care using computer understanding of human language

Timothy Imler, M.D., of the Regenstrief Institute and Indiana University School of Medicine, will present "Quality Monitoring Utilizing Natural Language Processing" at the 2015 Healthcare Information and Management Systems Society Conference and Exhibition in Chicago on April 14. The conference gathers thousands of health care, government, public health, industry and other health information technology leaders.
"There is a vast quantity of medical knowledge in text documents. Employing natural language processing, a linguistic technique using sophisticated software to extract meaning from spoken or written language, allows the computer to 'understand' medical records in ways that were not possible previously," Dr. Imler, a Regenstrief Institute investigator and Indiana University School of Medicine assistant professor of medicine in the Division of Gastroenterology and Hepatology, said.
"For example, with natural language processing we can use computers to track quality of care at the individual provider, service line and health-care systems levels. That gives us a practical, new tool for enhancing quality," Dr. Imler said.
The Regenstrief Institute's Center for Biomedical Informatics is an innovator in the clinical application of natural language processing. At the HIMSS gathering Dr. Imler will explore how the utilization of natural language processing can be implemented to improve care. A gastroenterologist and informatician, he recently published a multi-center study of the impact of natural language processing on colonoscopy, including quality and completeness of reporting of the findings, as well as recommendations for the surveillance interval.
"Text documents have historically been black boxes to researchers and quality experts, because they had to be laboriously evaluated by hand, one at a time," Dr. Imler said. "With natural language processing we can utilize mountains of data we already have to improve care and, as appropriate, provide information to those who need it including physicians, payers and accountable care organizations.
"We want access to the treasure trove of information contained in narratives and we have already accomplished that in Indiana through the Indiana Network for Patient Care, developed by the Regenstrief Institute and now supported by the Indiana Health Information Exchange."
The Indiana Network for Patient Care currently has more than 90 million documents with narratives on 3.3 million patients. With the application of natural language processing, these documents are read and analyzed by computers, and interpreted by clinicians and quality control experts.
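As a deliberately crude illustration of the kind of extraction involved, here is how a single quality measure (the recommended surveillance interval) might be pulled from free-text colonoscopy reports. Real clinical NLP systems are far more sophisticated than a regular expression; the pattern and the sample note below are invented:

```python
import re

# Invented pattern: matches phrases like "repeat colonoscopy in 5 years"
INTERVAL_RE = re.compile(
    r"repeat (?:colonoscopy|exam) in (\d+)\s*(year|month)s?",
    re.IGNORECASE,
)

def surveillance_interval(report_text):
    """Return (number, unit) of the recommended interval, or None."""
    m = INTERVAL_RE.search(report_text)
    return (int(m.group(1)), m.group(2).lower()) if m else None
```

Run over millions of reports, even a simple extractor like this turns a pile of narratives into a measurable quality indicator per provider or service line.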
To date, most hospital systems do not use natural language processing, according to Dr. Imler -- a deficiency he says can be corrected with expertise but without much additional expense, as the difference in cost between processing one text field and a million is negligible.

Who's a CEO? Image search results can shift gender perceptions

A new University of Washington study assesses how accurately gender representations in online image search results for 45 different occupations match reality.
In a few jobs -- including CEO -- women were significantly underrepresented in Google image search results, the study found, and that can change searchers' worldviews. Across all the professions, women were slightly underrepresented on average.
The study also answers a key question: Does the gender ratio in images that pop up when we type "author," "receptionist" or "chef" influence people's perceptions about how many men or women actually hold those jobs?
In a paper to be presented in April at the Association for Computing Machinery's CHI 2015 conference in South Korea, researchers from the UW and the University of Maryland, Baltimore County found that manipulated image search results accounted for, on average, 7 percent of a study participant's subsequent estimate of how many men and women work in a particular field, compared with their earlier estimates.
"You need to know whether gender stereotyping in search image results actually shifts people's perceptions before you can say whether this is a problem. And, in fact, it does -- at least in the short term," said co-author Sean Munson, UW assistant professor of human centered design and engineering.
The study first compared the percentages of women who appeared in the top 100 Google image search results in July 2013 for different occupations -- from bartender to chemist to welder -- with 2012 U.S. Bureau of Labor Statistics data showing how many women actually worked in each field.
In some jobs, the discrepancies were pronounced, the study found. In a Google image search for CEO, 11 percent of the people depicted were women, compared with 27 percent of U.S. CEOs who are women.
Twenty-five percent of the people depicted in image search results for authors were women, compared with 56 percent of actual U.S. authors.
By contrast, 64 percent of the telemarketers depicted in image search results were female, while that occupation is evenly split between men and women.
Yet for nearly half of the professions -- such as nurse practitioner (86 percent women), engineer (13 percent women), and pharmacist (54 percent women) -- those two numbers were within five percentage points.
"I was actually surprised at how good the image search results were, just in terms of numbers," said co-author Matt Kay, a UW doctoral student in computer science and engineering. "They might slightly underrepresent women and they might slightly exaggerate gender stereotypes, but it's not going to be totally divorced from reality."
When the researchers asked people to rate the professionalism of the people depicted in top image search results, though, other inequities emerged. Images showing a person matching the majority gender for a profession tended to be ranked by study participants as more competent, professional and trustworthy. Participants were also more likely to choose such images to illustrate that profession in a hypothetical business presentation.
By contrast, the image search results depicting a person whose gender didn't match an occupational stereotype were more likely to be rated as provocative or inappropriate.
"A number of the top hits depicting women as construction workers are models in skimpy little costumes with a hard hat posing suggestively on a jackhammer. You get things that nobody would take as professional," said co-author Cynthia Matuszek, a former UW doctoral student who is now an assistant professor of computer science and electrical engineering at University of Maryland, Baltimore County.
Most importantly, researchers wanted to explore whether gender biases in image search results actually affected how people perceived those occupations.
They asked study volunteers a series of questions about a particular job, including how many men and women worked in that field. Two weeks later, the researchers showed the volunteers a set of manipulated image search results and asked the same questions.
Exposure to skewed image search results did shift their estimates slightly, accounting for 7 percent of those second opinions. The study did not test long-term changes in perception, but other research suggests that many small exposures to biased information over time can have a lasting effect on everything from personal preconceptions to hiring practices.
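One simple way to read the 7 percent figure is as a mixing model, in which the manipulated results carry 7 percent of the weight in a participant's revised estimate. This is our reading of the reported effect size, not the authors' statistical model:

```python
def post_exposure_estimate(prior_pct, shown_pct, weight=0.07):
    """Blend a prior estimate with the proportion shown in search results.

    weight is the share of the revised opinion attributed to the
    manipulated results (0.07 per the study's average effect).
    """
    return (1 - weight) * prior_pct + weight * shown_pct
```

On this reading, a participant who believed 40 percent of CEOs are women and was shown results containing 10 percent women would revise the estimate to about 37.9 percent.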
The measured effect raises interesting questions, the researchers say, about whether image search algorithms should be changed to help counter occupational stereotypes.
"Our hope is that this will become a question that designers of search engines might actually ask," Munson said. "They may come to a range of conclusions, but I would feel better if people are at least aware of the consequences and are making conscious choices around them."

Technologies set to reinvent the PC in 2015

In an era of slick gadgets, PCs are the dinosaurs, ensnared in wire clutter, sporting tired 2D cameras and stricken with the occasional blue screen of death. Technology coming up in 2015, though, is set to make PCs more interactive, fun and perhaps nosier than you'd like them to be.

Apple's iPad changed the way people viewed computers and spurred PC innovation. Hardware makers drew ideas from mobile devices, gaming consoles and even 3D printers to rethink the PC, and the resulting new technologies will have a profound effect on how laptops and desktops are used next year and into the future.

Perhaps the most interesting idea is Intel's "wire-free" PC, in which wireless technology will replace display, charging and data transfer cables. Chip maker Intel next year will show an experimental laptop that has no ports and relies completely on wireless technology to connect to monitors and external storage devices.

Interactive computers will have 3D cameras that behave more like eyes, with the ability to recognize objects and measure distances. Sensory input through sound, voice and touch will help PCs respond to and anticipate our needs.

And like every year, PCs will get thinner, faster, lighter and have longer battery life. Games and movies will look smashing with higher-resolution displays. The new technology, however, will come with a price.

Here are six disruptive technologies that could change the face of computing in the next year:

Wireless charging

Place a laptop on a table, and it'll automatically start charging. No wires needed, no need to carry a power brick. That's how Intel views wireless charging for laptops, which could become a reality next year. Intel wants to make wireless chargers as easy to find as a Wi-Fi signal, and wants to bring the technology to cafes, restaurants, airports and other public places so laptops can be recharged without power adapters.
The first laptops with wireless charging could come out next year, and Intel has shown a few prototypes laptop being recharged on a table.Intel is pushing wire-free PCs

The latest brain model yet

One little doughnut — a lab-engineered protein doughnut — could help researchers cure Alzheimer’s and Parkinson’s and restore a traumatized brain. In August, scientists at Tufts University unveiled a new 3-D human brain “doughnut” model that accurately imitates brain function and injury response.
These multicolored protein ring doughnuts mimic brain regions.
David Kaplan/Tufts University
David Kaplan and his team built their model by constructing a collagen gel-filled protein ring that cultures rat neurons. Unlike older, simpler models, this one’s compartmentalized architecture (indicated by the multicolored rings) simulates the regions of the brain, and the chemically neutral protein allows neurons to connect realistically.
After growing the model for two months — previous ones lasted a few weeks at best — the team dropped different weights on the system to simulate brain trauma, like a concussion. Amazingly, all measured responses mimicked a human brain, making it the most realistic, responsive 3-D human brain model to date. And its longevity will allow researchers to introduce diseased cells to the model to better understand how brain diseases play out over time.
Although it’s impressive, the model has limitations. It absorbs oxygen and nutrients through cell membranes, not blood vessels, shortening its life span. Researchers want to integrate a humanlike vascular component to “feed” the model and extend its already superior life span.

The science fraud

The suicide of a stem cell researcher in Japan last summer prompted a great deal of soul searching in science. Yoshiki Sasai’s death came after a scandal involving two papers retracted for fraud — the most high-profile case of scientific misconduct in 2014. But it was far from the only one.
Serious questions were also raised about stem cell research by Harvard’s Piero Anversa. We learned more about Cory Toth, a former diabetes researcher at the University of Calgary, whose lab fabricated data in nine published articles. And we saw the discovery of an apparent ring to generate positive assessments, aka peer reviews, of submitted manuscripts, 60 of which wound up being retracted.
It might seem, then, that 2014 was an annus horribilis in the world of science fraud. For many in the public, which pays for much of this research in tax dollars, news of these events may have come as a rude awakening. But at Retraction Watch, when we see and hear that kind of commentary, we feel a little like the police captain in Casablanca who proclaims he’s “shocked, shocked!” to learn there is gambling at Rick’s, only to be handed his winnings a moment later.
We started Retraction Watch in 2010, and every year since then, we’ve witnessed at least a few cases big enough to warrant headlines: anesthesiologist Yoshitaka Fujii, record holder for retractions at 183; Diederik Stapel, whose groundbreaking social psychology work was almost entirely fabricated; Joachim Boldt, the German critical-care specialist and previous retraction record holder. The list goes on.
So what can we learn from all these scandals? Are scholarly journals and their editors, who insist that their peer-reviewed studies are more trustworthy than everything else the public hears about science, little better than carnival barkers hawking bogus trinkets? 
The short answer is no. Journals and publishers are, for the most part, doing a good job. They increasingly use software to screen manuscripts for plagiarism, and some even employ statistics experts to review papers for signs of data fabrication. In the Fujii case, for example, the British journal Anaesthesia had a stats guru analyze Fujii’s articles. His verdict: The chances that the data were valid were infinitesimal.
It’s impractical to apply this sort of extensive scrutiny to every one of the nearly 2 million manuscripts submitted each year. But conducting statistical reviews of papers that get flagged during the editorial process — or, perhaps more important, after publication — is an achievable goal, one that would make a significant contribution to the integrity of the scientific literature. 
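Statistical screens of this kind can start from surprisingly simple tests. As a toy illustration (not the specific method the Anaesthesia statistician applied to Fujii's papers), the sketch below compares the leading digits of a set of reported numbers against Benford's law, a distribution that naturally occurring data often follows and fabricated data often does not; a large chi-squared statistic would flag the data for closer human review.

```python
from collections import Counter
from math import log10

def benford_chi2(values):
    """Chi-squared statistic comparing the leading digits of `values`
    to the Benford's-law distribution P(d) = log10(1 + 1/d).
    A large result suggests the digits deviate from the pattern
    naturally occurring measurements often follow; it is a flag
    for human review, never proof of fabrication on its own."""
    # Extract each value's first significant digit (skip zeros).
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v]
    n = len(digits)
    counts = Counter(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * log10(1 + 1 / d)
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    return chi2
```

In practice a screen like this would be one of many signals; published analyses of suspect papers also check variances, terminal digits and improbably similar baseline tables across studies.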
In fact, post-publication peer review is an emerging phenomenon in scholarly publishing. On sites like PubPeer, researchers critique papers, pointing out everything from errors or other problem spots to potentially manipulated images and other evidence of misconduct. One of the reasons there seems to be more fraud is simply that we’re better at finding it.
Many scientists, journal editors and publishers have reacted warily to PubPeer and its ilk. Some contend that the anonymity of the post-publication reviewers breeds witch hunts and harms innocent bystanders. But the sites are doing a service by catching horses even though they have already left the barn. In addition, small but growing efforts have begun to test whether research holds up by repeating (in scientific parlance, replicating) experiments in cancer and psychology research.
All of these efforts underscore a critical point, one that science may need some time to embrace: The paper is not sacrosanct. It does not come into the world like a flawless, shining deity immune to criticism or critique. If more scientists come to think of a new publication as a larval stage of scientific knowledge and if fewer schools and funding agencies prize the high-profile journal article — basing tenure, grant and promotions on it — then researchers will feel less pressure to cut corners and manufacture dramatic results.
Reporting on cases in which scientists have committed fraud can be disheartening, even heartbreaking. But for every fraudster out there, we know there are dozens of scientists who are quick to correct the record when they discover problems in their work. And they rail against the reluctance of many of their peers to do the same. Sadly, many scientists are worried that acknowledging any fraud in their midst will discourage funding.
We are guided by the old chestnut: The cover-up is worse than the crime. If the growing awareness of an ongoing problem has led to more transparency, the scientific process, and the public who benefits from the knowledge it generates, will be better off.
[This article originally appeared in print as "The Year in Fraud."]

Ebola Attack

A burial team member removes the body of a woman who died the night before. She had been awaiting treatment in an Ebola holding center in Monrovia, Liberia, in August.
Kieran Kesner
It started quietly, deep in the forest of southern Guinea.
In Meliandou, a village in the prefecture of Guéckédou, a 2-year-old boy contracted the virus, possibly from a fruit bat. The child’s flu-like symptoms at first would have caused little alarm. But before long he began vomiting, and his stool was black with blood.
The young boy died on Dec. 6, 2013. By New Year’s Day, his mother, sister and grandmother were dead. A month later, so were two mourners who had attended the grandmother’s funeral, a local nurse and the village midwife. Before they died, the two mourners and the midwife carried the virus to nearby villages and to the region’s hospital, infecting others.
Thus began the worst Ebola outbreak the world has ever seen. 
By last summer, people across Guinea, Sierra Leone and Liberia had retreated to their homes, unwilling or unable to get to clinics where they’d seen their friends and relatives go, never to return. Health care workers labored in sweltering facilities. More than 200 of them succumbed to the virus, and frightened staff fled their positions, forcing clinics to turn patients away.
Since the Ebola virus first emerged in 1976 in the Democratic Republic of Congo (then Zaire), flare-ups had occurred mostly in isolated Central African forest villages. News of Ebola’s lethality and its horrific symptoms — vomiting, diarrhea, and sometimes bleeding from the eyes, nose and other orifices — bred morbid curiosity and fear, and the gripping story of its emergence inspired the best-selling book The Hot Zone.
Despite Ebola’s ferocity, previous outbreaks infected a few hundred people at most. Ebola is highly infectious — just a few particles of the virus in a drop of sweat or blood can cause disease — and health workers must don personal protective suits and quarantine patients in isolation units. 
But the virus is not especially contagious. It’s transmitted only via close contact with a patient’s bodily fluids, excretions, soiled clothing or bedding. And patients are contagious only when they are palpably and visibly ill, making carriers easy to spot. All this helped health workers corral past outbreaks, and before long the virus would retreat into the forest.
Not this time. Why, in 2014, did Ebola spread sickness and death through West Africa and beyond — and how can science help stop it?
A Perfect Storm
Robert Garry knew early on that trouble was brewing. He was working at Kenema Government Hospital in Sierra Leone last March when he heard reports of Ebola in neighboring Guinea. He was there studying Lassa fever, a cousin of Ebola. At the time, 112 people had been infected and 70 had died from Ebola, but the World Health Organization (WHO) said Guinea’s outbreak was “relatively small still,” and Guinean officials said it was under control.
Garry, a virologist from Tulane University in New Orleans, knew a thing or two about how outbreaks of Lassa, Ebola and related viruses play out. “Lassa simmers,” he says. “Ebola explodes.”
And as the international community dozed, an outbreak detonated. Through the spring and summer of 2014, Ebola swept through Guinea and into its neighbors, Sierra Leone and Liberia.
The 2014 Ebola epidemic, the first in West Africa, was driven by a confluence of factors: poverty and lack of health care infrastructure; traditional burial practices that helped spread the disease; deep-seated mistrust of Westerners, health care workers and authorities; and the region’s growing mobility. 
In May, women from Sierra Leone attended the funeral of a traditional healer who had treated Ebola patients across the border in Guinea. One of the mourners, a young pregnant woman, showed up at Kenema Government Hospital, where she miscarried and was diagnosed with Ebola. In all, 14 of the mourners were infected and spread the virus to their contacts in Sierra Leone, stoking that country’s epidemic, according to a DNA-sequencing study that Garry, renowned Sierra Leonean virologist and doctor Sheik Humarr Khan, Harvard virologist Stephen Gire and 55 colleagues later published in Science.
Garry and some colleagues, including Khan, readied the Kenema Government Hospital for Ebola patients. Soon he returned to the U.S., where he contacted federal officials to express his fears about a brewing epidemic. He received only “polite responses,” he says.
Meanwhile, in Monrovia, Liberia’s capital, physician and Christian missionary Kent Brantly began setting up that country’s first Ebola isolation ward. Before long the disease took hold in Monrovia’s dense slums, filling Brantly’s ward to overflowing. 
“The disease was spiraling out of control, and it was clear we were not equipped to fight it effectively on our own,” Brantly testified later to the U.S. Senate. He said he and his colleagues “began to call for more international assistance, but our pleas seemed to fall on deaf ears.”
On Aug. 8, WHO finally declared the epidemic a public health emergency of international concern. “No one was really imagining we would get to this situation,” WHO spokesman Tarik Jasarevic says. Within weeks, Liberia had surpassed Sierra Leone as the outbreak’s epicenter. By then, more than 2,400 had been infected and 1,346 had died.

Scientists Scramble
Ebola’s spread caught many scientists off guard at first. They knew that shortly after infection, the Ebola virus commandeers or kills immune cells, weakening the body’s defenses and letting the virus run wild. They also knew that the virus interferes with blood clotting, which leads to bleeding and, in many cases, multiple organ failure.
But no proven therapy or vaccine existed at first, in part because of long-standing funding shortfalls for diseases that mostly affect the developing world. By summer, however, researchers worldwide were racing to the lab to combat the epidemic.
At least four experimental Ebola drugs were in early stages of development. The first human safety trial of an Ebola drug began in January 2014 for TKM-Ebola, produced by Tekmira Pharmaceuticals, which contains snippets of RNA that target three genes essential to viral replication. But it was ZMapp that grabbed the headlines. This cocktail of three antibodies, from San Diego-based Mapp Biopharmaceutical, binds to Ebola, neutralizes it and alerts the immune system to the infection. As of last summer, it had not been tested in people.
In July, Khan contracted Ebola. Physicians with the nonprofit group Doctors Without Borders, which led early efforts to combat the epidemic, agonized about whether to treat him with ZMapp. They feared that he might die, spurring even more mistrust of health care workers, and they decided not to. In late July, Khan died in a Doctors Without Borders clinic in Kailahun, Sierra Leone. He was 39.
Just days earlier in Monrovia, Brantly and a fellow American missionary, aid worker Nancy Writebol, were also diagnosed. They received a few of the ZMapp doses available and were then flown to Emory University Hospital in Atlanta. After several harrowing weeks in intensive care, they recovered.
By September, the first reports had appeared showing that ZMapp effectively fights Ebola in monkeys, and the U.S. government pledged $25 million to help Mapp Biopharmaceutical manufacture more of the drug, test it in clinical trials and get it approved for human use.
To protect people from infection, researchers also developed two different vaccines, each with a key Ebola protein sewn into a harmless virus. One of the vaccines, developed by scientists at the pharmaceutical giant GlaxoSmithKline and the National Institute of Allergy and Infectious Diseases (NIAID), helped protect monkeys from Ebola infection 10 months after vaccination. NIAID then launched a clinical trial to test the vaccine’s safety, and GlaxoSmithKline committed to making 10,000 doses for health care workers by the end of 2014.
Too Little, Too Late
In September, the international community finally woke up to Ebola’s threat. President Barack Obama promised to send 3,000 troops to Liberia and build 17 treatment centers with 100 beds each. But by then, about 7,200 people had been infected, more than 3,300 had died, and the casualties were skyrocketing. Researchers at the Centers for Disease Control and Prevention built computer models, which predicted that, if a massive intervention failed to materialize, by January 2015 up to 1.4 million people could be infected.
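The CDC's actual model adjusted for underreporting and for different intervention scenarios, but the arithmetic that makes an uncontrolled projection explode is plain geometric growth: each case seeds roughly R new cases one serial interval later. A minimal sketch, with illustrative parameter values rather than the CDC's:

```python
def project_cases(initial_cases, r, serial_interval_days, horizon_days):
    """Project cumulative cases assuming each new case infects `r`
    others one serial interval later. A crude geometric-growth
    sketch, not the CDC's underreporting-corrected model."""
    generations = horizon_days // serial_interval_days
    new = initial_cases
    total = initial_cases
    for _ in range(generations):
        new = new * r          # each generation multiplies by r
        total += new           # accumulate the running case count
    return total

# With ~100 seed cases, r = 2 and a 15-day serial interval,
# a 90-day horizon already yields over 12,000 cumulative cases.
```

The same compounding, run in reverse, is why early intervention matters so much: shaving even a little off r early on removes entire later generations of cases.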
The epidemic finally hit home for many Americans on Sept. 30, when Thomas Eric Duncan, a Liberian visiting Dallas, became the first person diagnosed on U.S. soil. He was isolated and treated intensively, but he succumbed just eight days later. Then two nurses who treated Duncan at a Dallas hospital were diagnosed, becoming the first-ever cases of Ebola transmission on American soil, and a different epidemic — this one of anxiety — swept the nation. “As long as the outbreak continues in Africa, we need to be on our guard,” CDC director Tom Frieden told reporters.
It will be difficult to eliminate the virus entirely, since Ebola lurks in animals and periodically jumps to humans, Harvard’s Gire says. But we can corral it by better diagnosing and treating infections, and by setting up labs on the ground to track emerging Ebola strains by sequencing their genomes. “Constant surveillance is the only way to be sure we know where the virus is going,” he says. “It’s imperative.” 

Tuesday, 7 April 2015

EIGHT PROBLEMS IN INDIAN EDUCATION

HISTORICALLY, three systems have served the educational needs of Indians: Bureau of Indian Affairs schools, parochial or mission schools and public schools. Recently, through the Office of Economic Opportunity, the tribes themselves established a fourth school system, primarily in the Headstart Program.
These systems—still involved in attempting to better the lot of the Indian—have had much experience in providing programs to meet Indians’ needs and have been in the business of education on and off reservations for many years. In spite of what they have attempted and of what contributions they have made, acute problems exist in the Indian education field.
And Indian education will not progress, develop or evolve into a dynamic field unless the problems inherent in it are identified and solved.
In an analysis of the situation, I have categorized these problems into eight broad areas, from "lack of money" to "too many instant Indian experts."
Lack of money. By far one of the most pressing problems is the unavailability of money or inadequate funding of Indian education programs or systems. The demand far exceeds the supply, and available monies are only for the most basic educational needs of the students . . . "the traditional curriculum." Very small amounts, if any, are available for innovative programs and ideas.
Without adequate funding, the ideology and philosophy of Indian education become so many words. The concept of Indian education faces a bleak future characterized by stagnation, insensitivity, inadequate facilities and personnel. Is this what we educators wish to be contented with?
The irrelevant curricula. Just what do we mean by the often-repeated phrase, irrelevant curricula? My definition is that it is schools not doing their job in meeting the needs of their students, especially Indian students. This area encompasses four necessary corrections.
An Indian student presently is subjected to an educational system geared to the needs of the non-Indian student, without any concern for the unique problems and background of the Indian. Yes, the Indian must live in the white man’s world, but if he is to become a productive member of the human race, the schools must develop programs to meet his needs.
The American school curricula stress values in direct contrast with those held, in varying degrees, by the Indian. Such highly esteemed values as aggressiveness, competition, individual personal gain, out-smarting your fellow man, and verbal ability and agility are taught to the non-Indian youngster from the time he is able to comprehend. These values become the foundations of the American educational system. Thus, the Indian student is thrown into a foreign situation: he has no experiential background comparable to it, and consequently retardation is "built into" the educational program as far as the Indian is concerned.
Another aspect is the stress on the English language in the system. If educators would recognize that English is not the mother tongue of most Indian students, educational programming could become more relevant, meaningful and rewarding to the Indian student.
If curriculum experts would include courses reflecting the positiveness of the Indians’ contributions to the greater society, another correction would be made. It is not difficult to understand why the average Indian student has a negative self-concept: he is taught in a foreign classroom, by a teacher who is literally a foreigner, and in a foreign language that he comes from a people who were bloodthirsty, marauding killers, and that the only good Indian is a dead Indian. Correct this image by eliminating these teachings, and replacing them with more positive characteristics.
Education has directly contributed to the destruction of the institution of the family among Indians. To illustrate this engulfment, rather than bridging, of parent and child, let me give the following example.
Fifth graders are studying the atom or atom bomb and its effect on society as a whole. If the Indian child seeks to understand the concept of the atom more fully in an inquiry at home, he will discover that his parents are unable to help him gain that understanding because there is no concept paralleling the atom in the Indian language. Instead of help or clarification, the child may receive some type of scolding. In the case of the non-Indian child, the parents may not know the answer, but they have other resources to which to turn—a neighbor, a set of reference books, a nearby library. Thus, the Indian child begins to question the intelligence of his parents, and when this happens, the parental role is threatened and weakened. This weakening continues as the child progresses through school because the parent falls further behind, as he is not keeping up with his child. Destruction of the family institution is therefore hastened.
Lack of qualified Indians in Indian education. By far the most glaring problem is the acute shortage of qualified Indians in Indian education. Materialistic gains, incentives and opportunities entice the qualified Indian educator away from this challenging field. There is much hard work and many challenges in Indian education: isolation, poor or inadequate facilities, eager but academically deprived students, but one’s ingenuity, creativity, patience and forbearance are put to a real test in facing these and other challenges. If Indian education is to meet the needs of the students, if it is to have the sensitivity required, if it is to be dynamic and viable, it must have more qualified Indian educators—it must reach the stage wherein it will challenge the Indian educator to take up arms to join its ranks and to improve its lot.
Insensitive school personnel. It is tragic that this exists in the 20th Century. Too many administrators and teachers are not knowledgeable about the American Indian. Whether it is attributable to apathy, indifference or design does not lessen the problem. If school personnel are truly educators, it behooves them to learn about the people they are teaching: To fail in this task is to fail to educate. The burden of this responsibility rests squarely on the shoulders of the educator, and the exercise of that responsibility is long overdue.
Differing expectations of education programs. As noted in the section on irrelevant curricula, the American educational system is foreign in concept, principle and objective to the Indian student. The thinking, attitudes and experiences of the non-Indian are the base of the value structure rather than the aspects of Indian culture. Thus the educational perspectives of the Indian are not considered. The Indian views education as providing him with immediate practical skills and tools, not as a delayed achievement of goals or a means for future gain.
Lack of involvement in and control of educational matters. The Indian has not been able to express his ideas on school programming or educational decision-making. When they have been expressed, his participation has been limited and restricted. If problems in Indian education are to be resolved, the Indian citizen must become involved. He needs to have more control in the programs to which his children are exposed, to have a say in what types of courses are in the curriculum, to help hire teachers, to establish employment policies and practices, and all of the other responsibilities vested in school administration—that of being on a Board of Education. There are working examples of Indian-controlled school boards. These dynamic systems point up the fact that Indians can handle school matters. It is time that more Indians became involved in such control.
Difficulties of students in higher education. Colleges and universities need to establish programs which can deal effectively with the problems and needs of the Indian student—if he is to remain in school. In general, the Indian student has an inadequate educational background as he may have been looked upon as less than college material in high school. He has unusual adjustment problems and usually inadequate financial help. It is time that more colleges and universities attempt to solve these development factors and provide a more successful educational experience for the Indian student.
Too many instant-Indian education experts. To the detriment of Indian education and its growth, each day sprouts more "instant Indian education experts," who do more damage than good. Usually, these experts have all the answers: they have completely identified the problems and have formulated solutions, but they leave it to the Indian to implement. Again, the Indian is given something to implement which he has had no part in formulating. These experts usually depend on superficial, shallow studies done in one visit to a reservation or school, or they depend on one or two conferences with Indians who have little or no knowledge of the critical problems confronting the Indian generally. Indian education can well do without these experts who cannot be reasoned with or who feel they know what is best for the Indian.

There may be other factors which contribute to the problems of Indian education, but these eight areas are, I think, contributing to the situation wherein Indian education is not realizing its full development.

Five entrepreneurs offering innovative solutions in rural India

Villages in India have spending power, but they also have some unique problems. This combination has stoked entrepreneurship among professionals aiming to offer solutions and tap into the rural opportunity... 

Tweaking technology is also making it possible for startups to offer new applications that suit rural consumers. It is the scale of the opportunity that is drawing scores of entrepreneurs to rural India. We take a look at five entrepreneurs who are offering unique solutions: 

1) EVOMO Research & Advancement - Abhinav Kumar CEO, EVOMO 

Based in: Ahmedabad 

USP: Aims to replace non-licensed local transport vehicles 

Funding: Rs 5 lakh from NID 

What it does: Designs and makes low-cost rural utility vehicles 

As a young automobile engineer, Abhinav Kumar dreamt of joining a professional racing team. But a casual visit to rural Uttar Pradesh, where he saw a range of locally manufactured vehicles being used to ferry people and goods, changed the 27-year-old's career ambitions. 

He realised there was consumer demand for a transport vehicle that was both affordable and reliable. Soon he quit his job at auto-parts maker Sona Koyo Steering Systems to set up his own venture, Evomo, in 2010. 
Evomo's rural utility vehicle costs Rs 1.5 lakh, which is less than the price of a Tata Nano, dubbed the world's cheapest car. Kumar said he manages to keep costs low by using locally sourced material and drawing from global design ideas that are past the patent-protection stage. His target is to sell at least one vehicle in each of India's 6.5 lakh villages in the next five years. 

2) Ampere Vehicles 

Based in: Coimbatore 

What it does: Makes electric bikes 

USP: These bikes are used for local distribution by small entrepreneurs 

Target Revenue: Rs 100 crore in the next four years 

Funding: Rs 20 cr from Forum Synergies and Spain's Axon Capital 

In Coimbatore, electric-bike maker Ampere Vehicles is selling thousands of bikes being used by retailers to distribute water and milk in villages. Founded in 2008 by Hemalatha Annamalai, 45, a computer engineer, the company is expected to reach revenue of Rs 100 crore within the next four years. 
3) iKure Techsoft 

Based in: Kolkata 

What it does: Sets up rural health centres 

Target Revenue: Rs 1 crore this year 

Funding: Rs 45 lakh from Intellecap Impact Investment Network and Calcutta Angels; Rs 70 lakh from WEBEL 

Kolkata-based iKure Techsoft has built a network of rural health centres where doctors are available through the week and pharmacists dispense only accredited medicines. In addition the company has built a back-end software platform on which all health records are stored. This is used to centrally monitor key metrics such as doctors' attendance, treatment prescribed and pharmacy stock management. 
Sujay Santra, iKure's founder said the idea for the business came to him when he realised that his relatives and friends in a West Bengal village could not relate to his work at a US technology firm. "I was not doing anything which would impact them directly," said Santra, 36, who left Oracle to launch his healthcare venture. 

4) Aakar Innovations 

Based in: New Delhi 

What it does: Builds low-cost machines that produce sanitary napkins 

USP: The napkins are biodegradable 

Target revenue: Rs 60 lakh this year 

Funding: Rs 6.15 lakh loan from the NIF; Rs 3.6 lakh Mahindra 'Spark the Rise' grant 

5) nanoPix 

Based in: Hubli 

What it does: Image and video processing products for agriculture, healthcare 

USP: Machine vision-based blood smear analysis and automated cashew sorting 

Revenues: Rs 2.2 crore fiscal 2014 

Funding: Rs 80 lakh from friends, family; Rs 15 lakh loan from Deshpande Foundation 

Thirty-six-year-old Sasisekar Krish makes image and video processing products for agriculture and healthcare at his company nanoPix based in Karnataka's Hubli district. Farmers use his product to sort agriculture products like cashew by shape, size, colour and quality. The same technology also helps analyse blood smears to detect infectious diseases. 

nanoPix has already tied up with a few hospitals in rural Karnataka to use the product. To keep costs low, Krish, a former engineer at Wipro, has done away with expensive high-resolution cameras used in imaging technology. 

Instead he combines images from several low-cost cameras and uses a software algorithm to create three-dimensional models of the objects to be analysed.
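nanoPix has not published its pipeline, but the geometry underlying any multi-camera depth estimate is triangulation: for a rectified pair of cameras, an object's depth is inversely proportional to its disparity, the horizontal shift of its image between the two views. A minimal sketch of that relation, with illustrative numbers (the focal length and baseline are assumptions, not nanoPix's actual parameters):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth for a rectified stereo camera pair:

        depth = focal_length * baseline / disparity

    Nearby objects shift more between the two views (large
    disparity), so they come out with small depth values.
    focal_px is the focal length in pixels, baseline_m the
    distance between the cameras in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: an 800 px focal length, cameras 10 cm apart, and a
# feature that shifts 40 px between views sits 2 m away.
```

With more than two cheap cameras, the same triangulation is solved jointly across all views, which is what lets software quality compensate for low-resolution sensors.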