
advances in thought inference, lie detection

Allen L. Barker alb at datafilter.com
Fri Nov 7 13:53:29 EST 2003



[Here is a _USA Today_ article about advances in open research in lie
detection.  This is research beyond the polygraph.  Such things have
been studied in secret for decades, but the results there are kept
classified to conceal the methods and to keep them deniable (especially
when used nonconsensually on citizens to violate their human rights).
If you don't think the black-budget programs have been all over this
for years with classified-level sensors, etc., then I have a median
strip to sell you at a good price.  Here's just one article describing
earlier open research:
    http://www.datafilter.com/mc/machinesThatReadMinds.html
See also
    http://www.datafilter.com/mc/thoughtInference.html
for many related links.

Notice that the way the problem is being cast here is as a
classification problem: true or false; truth-telling or lying.  The
more general problem is that of "thought inference," where you would
try to infer anything you could about the person's thought process.  That
might extend to trying to externally reproduce a person's subvocalized
internal dialog, for example.  In a classification framework you have
simplified the problem in some ways, but perhaps over-simplified it in
others.  In many settings it is not even clear exactly what is a lie
and what is not.  That is a cultural and linguistic designation that
you are trying to correlate to biology.  As such, there will always be
"false positives" at some level.  So if you insist that something is a
"lie detector" only if it works 100% of the time, then no such thing is
possible.
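The inevitability of false positives can be made concrete with a
little Bayes-rule arithmetic.  The numbers below are purely
hypothetical -- sensitivity, false-positive rate, and base rate are
all made up for illustration -- but they show how a low base rate of
actual liars swamps even a fairly accurate detector:

```python
# Hypothetical base-rate arithmetic for any "lie detector" used as a
# screening classifier.  Suppose the device catches 90% of actual
# liars (sensitivity), wrongly flags 5% of truth-tellers
# (false-positive rate), and 1 in 1,000 people screened is actually
# lying about something material.  All three numbers are invented.
sensitivity = 0.90           # P(flag | lying)
false_positive_rate = 0.05   # P(flag | truthful)
base_rate = 0.001            # P(lying)

# Bayes' rule: probability that a flagged person is actually lying.
p_flag = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_lying_given_flag = sensitivity * base_rate / p_flag

print(round(p_lying_given_flag, 4))  # 0.0177
```

Under these made-up numbers, fewer than 2% of the people the device
flags would actually be lying; over 98% of flags would be false
positives, no matter how carefully the test is administered.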

Some of the ethical questions regarding lie detectors, then, must
deal with the reality of false positives.  This will arise even in
consensual applications of lie detectors.  Even more troubling ethical
and human rights questions arise when you consider nonconsensual or
even unwitting applications of the technology.  What right do you have
to the privacy of your own thoughts?  Is there any situation where you
implicitly give up that expectation of privacy for your formerly
private thoughts?  The article refers to an ethicist who suggests that
by going to an airport you would have implicitly given up the
expectation of privacy for your thoughts.  I strongly disagree, but at
least he is openly discussing the ethical issues related to the
technology.  Too often the approach is that technologies which
"officially don't exist" therefore cannot cause ethical and human
rights abuse problems.  (Even if you *do* consent, who "owns" your
thoughts?  What can be done with the information?)

The technology is improving every day, both in the open sector and in
the secret black-budget programs.  So this issue is *not* going to go
away.  It is only going to get more prevalent.  Not every technology
should be used just because it is possible to do so in a given
situation.  If I have the technology to shoot you, that does not mean
that I can -- either ethically or legally.  Even if you leave your
door unlocked, that does not mean that I can enter your home and do as
I please.]


--------------------------------


Terrorism lends urgency to hunt for better lie detector
http://www.usatoday.com/tech/news/techpolicy/2003-11-04-lie-detect-tech_x.htm
By Richard Willing, USA TODAY
Posted 11/4/2003 7:48 PM, Updated 11/5/2003 11:31 AM

PHILADELPHIA -- In a quiet corner of the University of Pennsylvania
campus, professor Britton Chance is using near-infrared light to peek
at lies as they form in the brains of student volunteers.

Eventually, Chance hopes to see something else: a day when a device
like his replaces the old, often inaccurate polygraph as the best way
for the U.S. government to detect lies told by spies, saboteurs and
terrorists.

Chance is among dozens of university and government researchers who
have invigorated the hunt for a better lie detector, an effort that
has been made more urgent by America's focus on national security
since the terrorist attacks of Sept. 11, 2001.

In labs across the nation, researchers are using technologies
originally developed to examine diseases, brain activity, obesity and
even learning disorders to try to solve some of the mysteries of human
conduct. The provocative idea behind some of the research is to go
beyond measuring the anxiety of a liar -- as polygraphs try to do --
and to catch the lies as they form in the human brain.

"We need something; we have a country under stress" because of
increased fears of terrorism, says Chance, 90, a biochemist and
engineer who helped to develop military radar during World War II. "It
might be fixed by finding out what people are thinking about."

Even its staunchest defenders doubt that the polygraph is up to the
job. Invented in 1915, the device uses wires, cuffs and a chest
harness to measure changes in breathing, perspiration and heart
rate. The presumption behind polygraph tests is that such changes can
be brought on by the stress of telling a lie.

But researchers have long questioned the polygraph's accuracy, in part
because the test itself can make a person nervous enough to skew the
results. In criminal cases, the accuracy of such tests can vary
widely. Courts in only one state, New Mexico, routinely accept
polygraph results as evidence.

And security screeners who use the machine to try to pick out would-be
terrorists or spies have a more difficult challenge, polygraph critics
say. Without details of a specific crime or security violation to ask
about, polygraphs miss real spies and sometimes implicate innocent
people.

Former CIA officer Aldrich Ames, who spied for the Soviet Union, and
former Defense Intelligence Agency analyst Ana Belen Montes, who spied
for Cuba, both passed polygraphs.

The polygraph is "a technology under duress," says Frank Horvath, a
Michigan State University professor of criminology and a Defense
Department adviser. "The question is: Is there some way better, and
how do you find it?"

The Defense Department, the FBI and the CIA are among the
U.S. agencies trying to answer that.

Still no proven way

The Defense Department's Polygraph Institute at Fort Jackson, S.C., is
financing at least 20 projects aimed at finding a better lie
detector. Another Pentagon office, the Defense Advanced Research
Projects Agency, is exploring magnetic resonance imaging (MRI) and
other technologies. The FBI and CIA are backing more research.

Because much of the work is secret, it is difficult to estimate how
much is being spent.

All the projects are in their early stages, and they are shadowed by a
glaring fact: Scientists still haven't proven that there is a
scientific way to catch a liar. If a device such as Chance's were to
become the standard, a range of ethical and legal questions would pop
up over how it should be used.

For now, government examiners continue to rely on the old device.

The Polygraph Institute's most recent annual report says it gave
11,566 polygraph tests in fiscal 2002 for the Defense Department and
other U.S. agencies, slightly more than the average number for the
past five years.

About three-quarters of those were security screenings aimed at
weeding out potential spies and terrorists. All but 20 test-takers
were cleared; as of September 2002, when the institute last issued a
report, those 20 were still being investigated.
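For a sense of scale, here is the arithmetic on the institute's own
figures as reported above.  Reading the passage as 20 flags among the
roughly three-quarters of the 11,566 tests that were security
screenings (the article leaves that reading ambiguous), the flag rate
works out to about a quarter of one percent:

```python
# Arithmetic on the Polygraph Institute figures as reported: 11,566
# tests in fiscal 2002, about three-quarters of them security
# screenings, with all but 20 test-takers cleared.  Whether the 20
# came from all tests or only the screenings is not stated; this
# assumes the screenings.
total_tests = 11_566
screenings = 0.75 * total_tests   # roughly 8,675 screenings
flagged = 20

flag_rate = flagged / screenings
print(f"{flag_rate:.2%}")  # 0.23%
```

With a flag rate that low, the false-positive question raised in the
commentary above becomes the central one: almost everything turns on
how many of those few flags point at innocent people.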

The institute's tests do not include hundreds, perhaps thousands, of
tests done on FBI, CIA and National Security Agency employees and
applicants by examiners within those agencies. Those numbers are
classified.

Developing an alternative to the polygraph "isn't going to be solved
in the short term," says Stephen Fienberg, a psychology professor at
Carnegie Mellon University in Pittsburgh who led a federal study last
year that was critical of the polygraph. But "some of these things are
intriguing."

The 'trauma of deceit'

Before 9/11, Chance's lab was using high school students to study how
the brain responds to stress, including what Chance calls the "trauma
of deceit."

His chief tool was the "cognoscope," near-infrared light sensors
mounted on a Velcro headband that measured blood and oxygen flow in
students' brains when they were asked to lie.

Chance found that forming a lie produced a milliseconds-long burst of
bloodstream activity in the prefrontal cortex, the part of the brain
known as the center of decision-making.

"You could see the thought before it is articulated," he says.

After 9/11, the Defense Department became interested in Chance's work.

At the Polygraph Institute last year, 42 soldier volunteers tested the
cognoscope by answering questions about a staged crime. Some were
truthful; others were told to lie about their involvement. The device
correctly picked the liars but also recorded what polygraph examiners
call a "false positive" -- a soldier who was telling the truth but
whose infrared brain image indicated he was lying. Chance hopes
further research will show why.

If proved effective, the cognoscope would have advantages over the
polygraph. Polygraph results require interpretation; the cognoscope
gives an instant answer.

Can 'intent' be detected?

Chance is testing larger numbers of students. His goal: to understand
how "differences between individuals" could affect the cognoscope's
ability to detect deception. He wants to explore whether the
technology could detect not only deception, but also thoughts with
"malevolent intent."

"The more we can learn about what's in the other guy's mind," he says,
"the better off we'll be."

Chance acknowledges that he's troubled by the ethical questions raised
by a device that might be able to observe people's thoughts before
they realize they have had them.

He sought advice from Arthur Caplan, an ethicist who also is a Penn
professor. Caplan's verdict: Mind-screening would be OK at airports
and in government job interviews, situations in which those tested
would be presumed to have waived some privacy rights.

And beyond that?

"It's the great unknown," Chance says, "or at least one part of it."

In another lab at Penn, Daniel Langleben is using an MRI machine to
try to detect deception in a different area of the brain. His work is
privately funded, but the Polygraph Institute has approached him about
sharing his findings.

Based on his studies of addicts and learning-impaired children,
Langleben theorized that telling a lie requires two brain actions:
suppressing the truth, then concocting a falsehood. Finding evidence
of those tasks would be the sign of a deceptive brain, he thought.

In 2001, Langleben issued playing cards to 22 student volunteers and
told them to lie about which card they held. A scanner that places the
body in a magnetic field and measures tiny changes in the brain's
blood flow was aimed at areas of the brain's cortex that are linked to
thought suppression.

Langleben found that the areas that "lit up" during truth-telling also
were activated by lies. But other areas of the brain were particularly
active when subjects lied, which he says validated his theory.

Langleben thinks the MRI has an advantage over the polygraph because
it charts the lie, not the anxiety that might be caused by lying. But
the drawbacks are clear. MRI tests cost up to $1,500, three times more
than most polygraphs. MRI machines also are unwieldy and require a
willing participant. Slight movements by a test subject can nullify
the results.

"We think we're on to something," Langleben says, "but we're still
working out exactly what."

Linking heat and deception

In Rochester, Minn., in 2001, endocrinologist James Levine was
performing obesity research by using a thermal-imaging camera that
observes how much heat is thrown off when a person chews.

Suddenly, a large screen accidentally fell to the laboratory
floor. Levine's startled test subject yelled. Levine nearly did too
when he saw what the camera had recorded. White patches, indicating
unusual amounts of heat, suddenly had appeared around the camera's
image of the subject's eyes.

"We thought, 'Is this associated with other forms of stress, such as
deception?' " Levine recalls.

The Polygraph Institute was intrigued. It tested thermal imaging on 20
soldiers at Fort Jackson. Levine's camera identified six of the eight
who lied about taking part in a staged crime, and 11 of the 12 who
told the truth. One truth-teller was wrongly deemed a liar.
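From those figures (6 of 8 liars flagged, 11 of 12 truth-tellers
cleared), the trial's hit and error rates work out as follows; with a
sample of only 20 people, these rates carry very wide error bars and
are shown only as arithmetic on the reported numbers:

```python
# Arithmetic on the thermal-imaging trial as reported: 8 liars, of
# whom 6 were flagged; 12 truth-tellers, of whom 11 were cleared
# (one false positive).
liars, liars_flagged = 8, 6
truthful, truthful_cleared = 12, 11

sensitivity = liars_flagged / liars                              # 0.75
specificity = truthful_cleared / truthful                        # ~0.917
accuracy = (liars_flagged + truthful_cleared) / (liars + truthful)  # 0.85

print(sensitivity, round(specificity, 3), accuracy)
```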

Thermal imaging has several potential advantages. Unlike polygraphs,
thermal imaging can be concealed and trained on an unwitting subject
up to 12 feet away. And the test also can be done quickly, making it
potentially useful for checking out travelers seeking to board
airliners, as well as would-be CIA agents.

Levine theorizes that the stress of lying leads a person to throw off
more heat. But the test sample was tiny, and much remains
unknown. He's planning more tests.

Focusing on words

Not all of the research involves high-tech machines.

Researchers at the University of Oklahoma are testing whether a liar's
false or incomplete statements contain clues that give him away. To
spot a lie, researchers look for words such as "maybe," "possibly" or
"to the best of my knowledge."

They say liars also typically offer facts that aren't relevant to an
interviewer's main question. An example: a suspect at the scene of a
robbery-homicide of a jeweler who describes the diamonds in their
display cases in detail, but who gives only a superficial description
of the victim's body.

With help from the Polygraph Institute, Oklahoma psychologists Shane
Connelly and Mike Mumford are testing more than 120 cues that could be
used to detect lies in security screenings. So far, cues developed by
the pair have been right about 75% of the time. But the method also
has done something the polygraph doesn't do well: reveal deception by
people who don't lie, but who hide the truth by leaving out key facts.

"A person doing the deceiving ... has to perform constant
self-monitoring and idea generation," Connelly says. "You have a lot
more to think about. It shows."

Analyzing voice stress

Some promising ideas have been rejected and sent back to the drawing
board. In St. Louis in 2001, researchers at Washington University
tested software designed to detect deception in a speaker based on
changes in stress levels in his voice. But it identified liars only
24% of the time.

In the mid-1990s, the CIA and FBI tested a device designed to identify
deception by using a halo-like helmet to measure brain waves triggered
when a subject saw something familiar. Both agencies decided it would
be difficult to adapt to interrogations.

So until something better comes along, agencies continue to use the
polygraph. Horvath says the government would drop the polygraph "in a
minute" if a more effective device were developed. Critics say it
should be dropped anyway.

"Is it better than nothing, or worse?" asks Steven Aftergood of the
Federation of American Scientists, which has criticized government
secrecy. "It's worse if it creates a false sense of security or
excludes qualified employees from government service. Any new
technology has to pass that test."
		



-- 
Mind Control: TT&P ==> http://www.datafilter.com/mc
Home page: http://www.datafilter.com/alb
Allen Barker




More information about the Neur-sci mailing list

Send comments to us at biosci-help [At] net.bio.net