Frontiers of Brain Science, day one

I spent Thursday and Friday at a conference on the brain, Frontiers of Brain Science, sponsored by the Knight Science Journalism program at MIT and the Kavli Foundation. Our host, Wade Roush, interim director of the Knight Science Journalism Fellowships, called the question of how the brain, a three-pound globe of water, nerve fibers, fatty acids and blood vessels, can form human consciousness one of the great unsolved mysteries of science.

I’ve parsed my notes from day one in this post. This is blurt, not synthesis. I hope not to misrepresent any of the ideas put forth, but please be aware that I have not fact-checked or sought independent verification for what was presented. This is straight chronology of the speakers, too; again, I am not synthesizing this into some news-like article (though if I were, I’d give it some variation of the old Star Trek tag: “Brain: The final frontier,” or “Scientists confess brain baffles them, but at least they still have jobs”).

We started with a look at a brain disorder. Autism researcher Helen Tager-Flusberg of Boston University has spent a decade trying to figure out how to diagnose autism earlier. Though children with autism can show signs of the disorder as young as age one, diagnosis usually doesn’t happen until they are five or older. Spotting it earlier would mean treatment could begin sooner, perhaps when the brain is at its most plastic and intervention presumably would have the most impact. Tager-Flusberg and her colleagues thought altered language development might be a key early diagnostic tool.

They thought wrong. They have been studying younger siblings of kids with autism, who are considered at high risk of developing autism themselves; of the children in the study who went on to develop autism, 70 percent also acquired language skills.

Being wrong is valuable for researchers, even if frustrating. It means delayed language development is no longer considered a core symptom of autism. It also means parents shouldn’t worry if their children haven’t begun to babble at six months (largely, she said, because babies don’t usually start to babble until about 7.5 months anyway). They should focus instead on repetitive behavior, or on changes in social smiles and eye gaze after six months.

But even those aren’t conclusive. “You can’t just take a snapshot at a point in time and say whether they will develop autism,” she said.

Autism remains a mystery. “We don’t even know what the right stories are to be followed,” Tager-Flusberg said.

Treatment also remains vexing. “We have no drugs that do anything for autism,” she said. One issue is that while families with autistic children would love to be able to have their kids take a drug, it’s difficult to get them to sign up for trials. Drugs come with side effects, and autism drugs would alter infant brain development in unpredictable ways. That’s a big step for parents to take at an age when kids really can’t be effectively diagnosed. Tager-Flusberg thinks it unlikely that we will develop a simple predictive test, or, if we do, “it’s beyond my lifetime.”

“Manage behavior. That’s all we have,” she said. She said behavior management can be very effective, but noted the obvious: it’s hard work.

In answer to some of our questions, she noted that autism diagnoses may be increasing in part because of a market substitution effect: insurers will pay for autism treatments, but not for many other kinds of behavioral disorders. She also said the modern educational focus on group work is worrisome. She said autism was unfamiliar to her as a child (to me as well, as someone born in 1964; I suspect I’m at least a decade younger than she is), but she can think of two boys in her elementary school who, in retrospect, clearly had autism. “But they did perfectly fine in our elementary school,” she said. In part, she thinks that’s because no one had to work in groups. In contrast, she said, “today if you don’t want to work with a group you are immediately shunted off to the school psychologist.”

Our second talk was on common sense: how the human mind produces it, and whether you could reverse engineer it to create better artificial intelligence programs. The speaker was Josh Tenenbaum, part of the Center for Brains, Minds and Machines.

Tenenbaum started by pointing out that the brain is not a computer, while acknowledging “it’s by far the best metaphor we have.” It’s fun to listen to someone who just loves playing with ideas. He spent more time talking about other people’s work than his own. And he wanted us to understand that modern artificial intelligence is in a way an exercise in symbols and abstraction, hearkening back to Aristotle and the formalizing of logic. He had, in fact, a slide of symbolic logicians that ran from Aristotle to modern AI. One of the photos was of Alonzo Church, who developed the lambda calculus, a simple universal formalism for computation that is still useful in software development. Tenenbaum put Church forth as the Alan Turing of software. “A Turing machine is a universal piece of hardware, and Church did it for software,” Tenenbaum said. I don’t remember running across Church before, though I must have; Wikipedia says he taught philosophy and math at Princeton and UCLA. (He was a life-long Presbyterian, too, meaning Church attended.)
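Since I had to look Church up, here is a minimal sketch of what his lambda calculus looks like when translated into Python: numbers and addition built out of nothing but functions. This is my own toy illustration, not something shown at the conference.

```python
# Church numerals: representing numbers purely as functions, in the spirit
# of Alonzo Church's lambda calculus. (My own toy example, not from the talk.)

zero = lambda f: lambda x: x                       # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))    # apply f one more time
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(church):
    """Convert a Church numeral into an ordinary Python integer."""
    return church(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
three = add(one)(two)

print(to_int(three))  # prints 3
```

The point, as I understood Tenenbaum, is universality: just as a Turing machine can in principle compute anything computable in hardware terms, Church’s pure functions can express anything computable in software terms.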

Tenenbaum also took us through early childhood development research, which offers a possible path toward more efficient machine learning algorithms, and which was just fun to learn about. One effort he highlighted was the Never-Ending Language Learner, or NELL. You can help train it here (it needs some help; it was 100 percent confident that Helsinki University of Technology was a sports team).

He told us to learn to code if we want to write about neuroscience and computer science. He also said that for good insights into how the mind works we should read this recent New Yorker article on the forthcoming video game No Man’s Sky. (I have just started it and can’t say I see the connection one page in.) The video game story ties in with his own work because he’s trying to reverse engineer how the brain processes images and use that understanding to create better predictive algorithms for computer vision programs. (See this report on a recent paper, co-authored by Tenenbaum, on probabilistic programming.)
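For anyone who, like me, had to look up the term: probabilistic programming roughly means writing a program that generates data from hidden causes, then running it “backwards” to infer those causes from observed data, which is one way to think about reverse engineering vision. Here is a minimal sketch of that flavor in Python; the render function and the numbers are mine, invented for illustration, and this is not the model from the paper.

```python
import random

# Toy "vision as inverse graphics": propose a hidden cause, simulate the
# image it would produce, and keep proposals whose simulation matches the
# observed image. A made-up illustration of the general idea of
# probabilistic programming, not the model from the paper.

def render(brightness):
    """Crude forward 'graphics' model: a 5-pixel image of one bright spot."""
    return [brightness * p for p in (0.1, 0.5, 1.0, 0.5, 0.1)]

def matches(a, b, tol=0.05):
    """True if two images agree pixel-by-pixel within a small tolerance."""
    return all(abs(x - y) < tol for x, y in zip(a, b))

observed = render(0.8)  # the image whose hidden cause we want to recover

samples = (random.uniform(0, 1) for _ in range(50_000))  # guesses from a prior
accepted = [g for g in samples if matches(render(g), observed)]

print(sum(accepted) / len(accepted))  # estimated brightness, close to 0.8
```

Real probabilistic programming systems use much smarter inference than this brute-force guess-and-check, but the division of labor is the same: a forward model that generates data, plus an inference procedure that runs it in reverse.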

He also addressed the controversy over AI and whether it’s actually dangerous. While AI is very visible right now, thanks to things like IBM’s Watson, Google’s PageRank, Facebook photo tagging, self-driving cars, and Amazon’s recommendation engine, he said, “none of them are really intelligent. These systems are fragile, and it’s easy to get past the limits of their intelligence.”

I couldn’t help thinking about how frequently I hear that many kinds of knowledge-work jobs are being automated, creating a sense that there might not be valid work for many people in the future. It’s hard to reconcile that with Tenenbaum’s assertion that “as an engineer I think we have to realize that the ways our brains and minds work…far exceed what computers can do.”

I found myself thinking that the stuff computers can’t be programmed to do often doesn’t pay well, or only pays well for freaks of nature or outliers, people with the right combination of skills and intangibles to be professional athletes or singers or CEOs (apologies to JP Morgan, who despised CEOs as functionaries).

Perhaps when the mundane can be automated, more humans will find themselves free to create things that stimulate, sustain and satisfy.

Our third speaker took the baton from Tenenbaum. Tomaso Poggio talked about what constitutes intelligence, and whether computer programs that can do things like beat humans at Atari games or add vision to cars are in fact intelligent. He seemed to think not, telling us they don’t do a good job with things almost any human can do, like read an expression on someone’s face and tell you what that person is thinking, or easily identify everyday objects like hats.

Poggio works on machine vision. I challenged him on whether vision is a facet of intelligence. Congenitally blind people still have intellects, I argued. He said vision and motor control are both very old and fundamental systems in the brain, and both represent a form of human intelligence. If we can understand these very old systems, it will help us understand how other parts of the cortex work.

He showed a chart of tasks at which computers are bettering humans: numerical computation, accounting, optimization, medical diagnostics, flight management, chess playing, loan approval, financial advice, trading, flight planning and search (Google vs. Yahoo; Google search is superhuman!). They are beginning to replace us at things like piloting planes, driving a car, recognizing speech, answering English questions (Watson and Jeopardy), translating from one language to another, and playing games. Endangered jobs include psychologists, economists, financial advisers and taxi drivers. But the unrest probably won’t start here. He thinks China, where thousands of human workers are being replaced by machines, may be ground zero for unrest over our automated future.

Want to tell your kids what to become? Gardeners, athletes, cooks, handymen and plumbers, good butlers and, fortunately for Poggio, scientists and engineers.

AI, it seems, will overwhelm most professions. But on the plus side, it could give us smarter, more useful objects, better group decision making and help in curing mental disease. I guess that means we’ll be unemployed, but not clinically depressed about it.

He had a few good pieces of trivia. The human brain has about as many neurons as the brains of 1 million flies, and it takes about 100 milliseconds for a visual signal to travel from our eyes to the visual cortex at the back of our brains. He also said neurons have much higher connectivity than transistors: three or four wires come out of a transistor, while thousands of ‘wires’ come out of a neuron. But neurons are slower, firing on a timescale of about 1 millisecond, or roughly 1 kilohertz, versus a gigahertz for transistors.
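Just to put those figures side by side, here is the back-of-the-envelope arithmetic they imply (my own calculation, using only the numbers he quoted):

```python
# Back-of-the-envelope comparison using the figures quoted above
# (my own arithmetic, not a slide from the talk).

neuron_rate = 1 / 1e-3      # one event per millisecond, about 1 kilohertz
transistor_rate = 1e9       # about 1 gigahertz

neuron_fanout = 1_000       # "thousands of 'wires'" per neuron (low end)
transistor_fanout = 4       # "three or four wires" per transistor

print(transistor_rate / neuron_rate)      # transistors: ~1,000,000x faster
print(neuron_fanout / transistor_fanout)  # neurons: ~250x the fan-out
```

So transistors win on raw speed by about a factor of a million, while neurons win on connectivity by a couple of orders of magnitude, at the low end of “thousands.”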

Our final speaker from day one was Lisa Feldman Barrett, a professor at Northeastern. Her goal was to take on the idea that emotions ‘live’ in certain parts of the brain, a version of “Essentialism,” or form fits function. The amygdala, for instance, is seen as the area of the brain that determines fear, and the insula disgust. She says this is not true, that there are core networks for emotion running across the brain.

In doing so, she also took on Charles Darwin himself. In 1872 he published The Expression of the Emotions in Man and Animals, a book Barrett called the Essentialist Bible, even though, she says, it directly contradicts Darwin’s own On the Origin of Species. The poet Ruth Padel (Darwin’s great-great-granddaughter) used these lines in her poem Muscle at the Corner of the Mouth: “He lays it out in sections like the segments of a worm. ‘The links are wonderful which connect effect with cause.’ Anger, painting cells of the skin with scarlet…”

I have never thought of Darwin as a leading figure in cognitive psychology, and I made a flippant remark that nobody else had either. Flip remarks often flop, and this one certainly did. One of the other attendees tweeted me a link showing the book had been cited 11,000 times in scholarly papers.

That’s what Barrett’s theory is up against. She’s also up against Hollywood. She argues the new movie Inside Out is purely Essentialism, and purely wrong. Instead, Barrett argues that specific parts of the brain do not control specific emotions. She’s reviewed more than 400 papers from the last five years. Among her findings: while the amygdala is activated during fear, “only 30% of experiments showed more activity (than you’d expect by chance), not what you’d expect if it is the home of fear.”

In fact, she said the amygdala activates for almost every type of task you can scan a brain for.

She had a number of other examples, but the Kuhnian metapoint was clear: she’s arguing that we need to see a paradigm shift.

Thus concluded the formal part of day one.
