[Nolug] CRYPTO-GRAM, May 15, 2005

From: Joey Kelly <joey_at_joeykelly.net>
Date: Sun, 15 May 2005 18:20:24 -0500
Message-Id: <200505151820.33839.joey@joeykelly.net>

---------- Forwarded Message ----------

Subject: CRYPTO-GRAM, May 15, 2005
Date: Sunday May 15 2005 06:22
From: Bruce Schneier <schneier@COUNTERPANE.COM>


                  May 15, 2005

               by Bruce Schneier
                Founder and CTO
       Counterpane Internet Security, Inc.

A free monthly newsletter providing summaries, analyses, insights, and
commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit

Or you can read this issue on the web at

Schneier also publishes these same essays in his blog:
<http://www.schneier.com/blog>. An RSS feed is available.

** *** ***** ******* *********** *************

In this issue:
      Blog: Schneier on Security
      REAL ID
      Should Terrorism be Reported in the News?
      New Risks of Automatic Speedtraps
      Crypto-Gram Reprints
      Detecting Nuclear Material in Transport
      The Potential for an SSH Worm
      Biometric Passports in the U.K.
      Lighters Banned on Airplanes
      Counterpane News
      Wi-Fi Minefields
      The PITAC Report on CyberSecurity
      State-Sponsored Identity Theft
      Combating Spam
      Comments from Readers

** *** ***** ******* *********** *************

           Blog: Schneier on Security

For eight months now, I have maintained a blog. It's basically the same
stuff you read in Crypto-Gram, only it comes out every day instead of once
a month. And I try to revise what I write there when I include it
here. Check it out if you're interested.


** *** ***** ******* *********** *************

                    REAL ID

The United States will get a national ID card. The REAL ID Act establishes
uniform standards for state driver's licenses, to go into effect in three
years, effectively creating a national ID card. It's a bad idea, and is
going to make us all less safe. It's also very expensive. And it all
happened without any serious debate in Congress.

I've already written about national IDs. I've written about the fallacies
of identification as a security tool. I'm not going to repeat myself here,
and I urge everyone who is interested to read those essays (links at the
end). Remember, the question to ask is not whether a national ID will do
any good; the question to ask is whether the good it does is worth the
cost. By that measure, a national ID is a lousy security trade-off. And
everyone needs to understand why.

Aside from the generalities in my previous essays, there are specifics
about REAL ID that make for bad security.

The REAL ID Act requires driver's licenses to include a "common
machine-readable technology." This will, of course, make identity theft
easier. Already some hotels take photocopies of your ID when you check in,
and some bars scan your ID when you try to buy a drink. Since the U.S. has
no data protection law, those businesses are free to resell that data to
data brokers like ChoicePoint and Acxiom. And they will; it would be bad
business not to. It actually doesn't matter how well the states and
federal government protect the data on driver's licenses, as there will be
parallel commercial databases with the same information.

(Those who point to European countries with national IDs need to pay
attention to this point. European countries have a strong legal framework
for data privacy and protection. This is why the American experience will
be very different from the European experience, and a much more serious
danger to society.)

Even worse, there's likely to be an RFID chip in these licenses. The same
specification for RFID chips embedded in passports includes details about
embedding RFID chips in driver's licenses. I expect the federal government
will require states to do this, with all of the associated security
problems (e.g., surreptitious access).

REAL ID requires that driver's licenses contain actual addresses, and no
post office boxes. There are no exceptions made for judges or police --
even undercover police officers. This seems like a major unnecessary
security risk.

REAL ID also prohibits states from issuing driver's licenses to illegal
aliens. This makes no sense, and will only result in these illegal aliens
driving without licenses -- which isn't going to help anyone's
security. (This is an interesting insecurity, and is a direct result of
trying to take a document that is a specific permission to drive an
automobile, and turning it into a general identification device.)

REAL ID is expensive. It's an unfunded mandate: the federal government is
forcing the states to spend their own money to comply with the act. I've
seen estimates that the cost to the states of complying with REAL ID will
be tens of billions. That's money that can't be spent on actual security.

And the wackiest thing is that none of this is required. In October 2004,
the Intelligence Reform and Terrorism Prevention Act of 2004 was signed
into law. That law included stronger security measures for driver's
licenses, the security measures recommended by the 9/11 Commission
Report. That's already done. It's already law.

REAL ID goes way beyond that. It's a huge power-grab by the federal
government over the states' systems for issuing driver's licenses.

REAL ID doesn't go into effect until three years after it becomes law, but
I expect things to be much worse by then. One of my fears is that this new
uniform driver's license will bring a new level of "show me your papers"
checks by the government. Already you can't fly without an ID, even though
no one has ever explained how that ID check makes airplane terrorism any
harder. I have previously written about Secure Flight, another lousy
security system that tries to match airline passengers against terrorist
watch lists. I've already heard rumblings about requiring states to check
identities against "government databases" before issuing driver's
licenses. I'm sure Secure Flight will be used for cruise ships, trains,
and possibly even subways. Combine REAL ID with Secure Flight and you have
an unprecedented system for broad surveillance of the population.

Is there anyone who would feel safer under this kind of police state?

Americans overwhelmingly reject national IDs in general, and there's an
enormous amount of opposition to the REAL ID Act.

If you haven't heard much about REAL ID in the newspapers, that's not an
accident. The politics of REAL ID was almost surreal. It was voted down
last fall, but was reintroduced and attached to legislation that funds
military actions in Iraq. This was a "must-pass" piece of legislation,
which means that there was no debate on REAL ID. No hearings, no debates
in committees, no debates on the floor. Nothing. And it's now law.

We're not defeated, though. REAL ID can be fought in other ways: via
funding, in the courts, etc. Those seriously interested in this issue are
invited to attend an EPIC-sponsored event in Washington, DC, on the topic
on June 6th. I'll be there.

Text of the REAL ID Act:

Congressional Research Services analysis:

My previous writings on identification and national IDs:

Security problems with RFIDs:

My previous writings on Secure Flight:


EPIC's Washington DC event:

** *** ***** ******* *********** *************

   Should Terrorism be Reported in the News?

In a New York Times op-ed, columnist John Tierney argued that the media is
performing a public disservice by writing about all the suicide bombings in
Iraq. This only scares people, he claimed, and serves the
terrorists' ends.

Some liberal bloggers have jumped on this op-ed as furthering the
administration's attempts to hide the horrors of the Iraqi war from the
American people, but I think the argument is more subtle than that. Before
you can figure out why Tierney is wrong, you need to understand that he has
a point.

Terrorism is a crime against the mind. The real target of a terrorist is
morale, and press coverage helps him achieve his goal. I wrote in Beyond
Fear (pages 242-3):

"Morale is the most significant terrorist target. By refusing to be
scared, by refusing to overreact, and by refusing to publicize terrorist
attacks endlessly in the media, we limit the effectiveness of terrorist
attacks. Through the long spate of IRA bombings in England and Northern
Ireland in the 1970s and 1980s, the press understood that the terrorists
wanted the British government to overreact, and praised their
restraint. The U.S. press demonstrated no such understanding in the months
after 9/11 and made it easier for the U.S. government to overreact."

Consider this thought experiment. If the press did not report the 9/11
attacks, if most people in the U.S. didn't know about them, then the
attacks wouldn't have been such a defining moment in our national
politics. If we lived 100 years ago, and people only read newspaper
articles and saw still photographs of the attacks, then people wouldn't
have had such an emotional reaction. If we lived 200 years ago and all we
had to go on was the written word and oral accounts, the emotional reaction
would be even less. Modern news coverage amplifies the terrorists' actions
by endlessly replaying them, with real video and sound, burning them into
the psyche of every viewer.

Just as the media's attention to 9/11 scared people into accepting
government overreactions like the PATRIOT Act, the media's attention to the
suicide bombings in Iraq is convincing people that Iraq is more dangerous
than it is.

Tierney writes:

"I'm not advocating official censorship, but there's no reason the news
media can't reconsider their own fondness for covering suicide bombings. A
little restraint would give the public a more realistic view of the world's
dangers.

"Just as New Yorkers came to be guided by crime statistics instead of the
mayhem on the evening news, people might begin to believe the statistics
showing that their odds of being killed by a terrorist are minuscule in
Iraq or anywhere else."

I pretty much said the same thing, albeit more generally, in Beyond Fear
(page 29):

"Modern mass media, specifically movies and TV news, has degraded our sense
of natural risk. We learn about risks, or we think we are learning, not by
directly experiencing the world around us and by seeing what happens to
others, but increasingly by getting our view of things through the
distorted lens of the media. Our experience is distilled for us, and it's
a skewed sample that plays havoc with our perceptions. Kids try stunts
they've seen performed by professional stuntmen on TV, never recognizing
the precautions the pros take. The five o'clock news doesn't truly reflect
the world we live in -- only a very few small and special parts of it.

"Slices of life with immediate visual impact get magnified; those with no
visual component, or that can't be immediately and viscerally comprehended,
get downplayed. Rarities and anomalies, like terrorism, are endlessly
discussed and debated, while common risks like heart disease, lung cancer,
diabetes, and suicide are minimized.

"The global reach of today's news further exacerbates this problem. If a
child is kidnapped in Salt Lake City during the summer, mothers all over
the country suddenly worry about the risk to their children. If there are
a few shark attacks in Florida -- and a graphic movie -- suddenly every
swimmer is worried. (More people are killed every year by pigs than by
sharks, which shows you how good we are at evaluating risk.)"

One of the things I routinely tell people is that if it's in the news,
don't worry about it. By definition, "news" means that it hardly ever
happens. If a risk is in the news, then it's probably not worth worrying
about. When something is no longer reported -- automobile deaths, domestic
violence -- when it's so common that it's not news, then you should start
worrying.

Tierney is arguing his position as someone who thinks that the Bush
administration is doing a good job fighting terrorism, and that the media's
reporting of suicide bombings in Iraq is sapping Americans' will to
fight. I am looking at the same issue from the other side, as someone who
thinks that the media's reporting of terrorist attacks and threats has
increased public support for the Bush administration's draconian
counterterrorism laws and dangerous and damaging foreign and domestic
policies. If the media didn't report all of the administration's alerts
and warnings and arrests, we would have a much more sensible
counterterrorism policy in America and we would all be much safer.

So why is the argument wrong? It's wrong because the danger of not
reporting terrorist attacks is greater than the risk of continuing to
report them. Freedom of the press is a security measure. The only tool we
have to keep government honest is public disclosure. Once we start hiding
pieces of reality from the public -- either through legal censorship or
self-imposed "restraint" -- we end up with a government that acts based on
secrets. We end up with some sort of system that decides what the public
should or should not know.

Here's one example. Last year I argued that the constant stream of
terrorist alerts was a mechanism to keep Americans scared. This week, the
media reported that the Bush administration repeatedly raised the terror
threat level on flimsy evidence, against the recommendation of former DHS
secretary Tom Ridge. If the media follows this story, we will learn -- too
late for the 2004 election, but not too late for the future -- more about
the Bush administration's terrorist propaganda machine.

Freedom of the press -- the unfettered publishing of all the bad news --
isn't without dangers. But anything else is even more dangerous. That's
why Tierney is wrong.

And honestly, if anyone thinks they can get an accurate picture of anyplace
on the planet by reading news reports, they're sadly mistaken.

Tierney's essay:

Blog reactions:
<http://tinyurl.com/b33e9> or <http://tinyurl.com/cl5fj>

My essay on terror alerts:

Tom Ridge's comments:
<http://www.usatoday.com/news/washington/2005-05-10-ridge-alerts_x.htm>

** *** ***** ******* *********** *************

        New Risks of Automatic Speedtraps

Every security system brings about new threats. Here's an example:

"The RAC Foundation yesterday called for an urgent review of the first
fixed motorway speed cameras.

"Far from improving drivers' behaviour, motorists are now bunching at high
speeds between junctions 14-18 on the M4 in Wiltshire, said Edmund King,
the foundation's executive director.

"The cameras were introduced by the Wiltshire and Swindon Safety Camera
Partnership in an attempt to reduce accidents on a stretch of the
motorway. But most motorists are now travelling at just under 79mph, the
speed at which they face being fined."

In response to automated speedtraps, drivers are adopting the obvious
tactic of driving just below the trigger speed for the cameras, presumably
on cruise control. So instead of cars on the road traveling at a spectrum
of speeds with reasonable gaps between them, we are seeing "pelotons" of
cars traveling closely bunched together at the same high speed, presenting
unfamiliar hazards to each other and to law-abiding slower road-users.

The result is that average speeds are going up, not down.

<http://tinyurl.com/7my9y>
<http://tinyurl.com/7eoz9>

** *** ***** ******* *********** *************

              Crypto-Gram Reprints

Crypto-Gram is currently in its eighth year of publication. Back issues
cover a variety of security-related topics, and can all be found on
<http://www.schneier.com/crypto-gram.html>. These are a selection of
articles that appeared in this calendar month in other years.

Warrants as a Security Countermeasure

National Security Consumers

Encryption and Wiretapping

Unique E-Mail Addresses and Spam

Secrecy, Security, and Obscurity

Fun with Fingerprint Readers

What Military History Can Teach Network Security, Part 2

The Futility of Digital Copy Protection

Security Standards

Safe Personal Computing

Computer Security: Will we Ever Learn?

Trusted Client Software

The IL*VEYOU Virus (Title bowdlerized to foil automatic e-mail filters.)

The Internationalization of Cryptography

The British discovery of public-key cryptography

** *** ***** ******* *********** *************

     Detecting Nuclear Material in Transport

One of the few good things that's coming out of the U.S. terrorism policy
is some interesting scientific research. This paper discusses detecting
nuclear material in transport.

The authors believe that fixed detectors -- for example, at ports -- simply
won't work. Terrorists are more likely to use highly enriched uranium
(HEU), which is harder to detect than plutonium. This difficulty of
detection is based more on HEU's low natural rate of radioactivity than on some
technological hurdle. "The gamma rays and neutrons useful for detecting
shielded HEU permit detection only at short distances (2-4 feet or less)
and require that there be sufficient time to count a sufficient number of
particles (several minutes to hours)."

The authors conclude that the only way to reliably detect shielded HEU is
to build detectors into the transport vehicles. These detectors could take
hours to record any radioactivity.

Of course, for this system to work you have to assume that the terrorists
will use commercial shipping services to transport nuclear material.


** *** ***** ******* *********** *************

          The Potential for an SSH Worm

SSH, or secure shell, is the standard protocol for remotely accessing UNIX
systems. It's used everywhere: universities, laboratories, and
corporations (particularly in data-intensive back office services). Thanks
to SSH, administrators can stack hundreds of computers close together into
air-conditioned rooms and administer them from the comfort of their desks.

When a user's SSH client first establishes a connection to a remote server,
it stores the name of the server and its public key in a known_hosts
database. This database of names and keys allows the client to more easily
identify the server in the future.

There are risks to this database, though. If an attacker compromises the
user's account, the database can be used as a hit-list of follow-on
targets. And if the attacker knows the username, password, and key
credentials of the user, these follow-on targets are likely to accept them
as well.

A new paper from MIT explores the potential for a worm to use this
infection mechanism to propagate across the Internet. Already attackers
are exploiting this database after cracking passwords. The paper also
warns that a worm that spreads via SSH is likely to evade detection by the
bulk of techniques currently coming out of the worm detection community.

While a worm of this type has not been seen since the first Internet worm
of 1988, attacks have been growing in sophistication and most of the tools
required are already in use by attackers. It's only a matter of time
before someone writes a worm like this.

This is an easy one to fix, though. One of the countermeasures proposed in
the paper is to store hashes of host names in the database, rather than the
names themselves. This is similar to the way hashes of passwords are
stored in password databases, so that security need not rely entirely on
the secrecy of the database. It solves the security problem with no loss
of functionality to the user.
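To illustrate the countermeasure, here is a minimal Python sketch of salted host-name hashing in the style OpenSSH uses for known_hosts (an HMAC-SHA1 of the host name, with the random salt stored alongside the hash); the function names are mine, not OpenSSH's:

```python
import base64
import hashlib
import hmac
import os

def hash_hostname(hostname, salt=None):
    # Store |1|base64(salt)|base64(HMAC-SHA1(salt, hostname)) instead of the
    # plaintext hostname.  A stolen known_hosts file then yields no hit-list:
    # the hashes can only be tested against guessed names, not read directly.
    if salt is None:
        salt = os.urandom(20)  # SHA-1 digest size
    digest = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return "|1|%s|%s" % (base64.b64encode(salt).decode(),
                         base64.b64encode(digest).decode())

def entry_matches(entry, hostname):
    # What the client does on connect: re-hash the name it is about to
    # contact with the stored salt and compare against the stored digest.
    _, _, salt_b64, digest_b64 = entry.split("|")
    salt = base64.b64decode(salt_b64)
    expected = base64.b64decode(digest_b64)
    candidate = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return hmac.compare_digest(candidate, expected)
```

The client's lookup still works exactly as before -- it hashes the name it is connecting to and compares -- which is why the scheme costs the user no functionality.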

The authors of the paper have worked with the open source community, and
version 4.0 of OpenSSH has the option of hashing the known-hosts
database. There is also a patch for OpenSSH 3.9 that does the same
thing. Unfortunately, the option is not turned on by default.


The fix:
<http://tinyurl.com/8938c>

** *** ***** ******* *********** *************


License-plate scanning by helicopter:
This is an example of wholesale surveillance, and something I've written
about before.
Of course, once the system is in place, it will be used for privacy
violations that we can't even conceive of. The only way to maintain
security is not to field this sort of system in the first place.

A revision of the excellent paper by Daniel Solove and Chris Hoofnagle that
gave specific legislative proposals for privacy reform.

"A Taxonomy of Privacy," by Daniel Solove. Really good work.

More failures in airport screening:
<http://www.cnn.com/2005/TRAVEL/04/16/airport.screeners.ap/>
My commentary on this is here:

The Department of Homeland Security is evaluating three different systems
to process exit visas.
Properly evaluating this trade-off would look at the relative ease of
attacking the three systems, the relative costs of the three systems, and
the relative speed and convenience -- to the traveler -- of the three
systems. My guess is that the system that requires the least amount of
interaction with a person when boarding the plane is best.

Interesting law review article on the liabilities of having an open
wireless network:

Universal automobile surveillance comes to the United Arab Emirates:
This kind of thing is also being implemented in the UK for insurance
purposes:
<http://tinyurl.com/6wmob>

A really good essay on security trade-offs by an anonymous CSO:

Two penguins going through airport security:
<http://tinyurl.com/aju23>

Ants staging ambushes:

The U.S. State Department is considering implementing its RFID passport in
such a way as to require a master key from a reader before the passport
broadcasts any of its details. The devil is in the details, but this is an
excellent idea.

"The Emergence of a Global Infrastructure for Mass Registration and
Surveillance": a really interesting report.

It's an old story: users disable a security measure because it's annoying,
allowing an attacker to bypass the measure. "A rape defendant accused in a
deadly courthouse rampage was able to enter the chambers of the judge slain
in the attack and hold the occupants hostage because the door was unlocked
and a buzzer entry system was not activated, a sheriff's report
says." Security doesn't work unless the users want it to work. This is
true on the personal and national scale, with or without technology.

Yet another PDF redacting failure: this one regarding classified material
in a U.S. report about the shooting of Italian secret agent Nicola Calipari
in Iraq.
<http://tinyurl.com/cq7y2>

Nice essay about the implications of the ChoicePoint data theft (and all
the other data thefts, losses, and disclosures making headlines).

The U.S. government is considering another chief cybersecurity position,
this one at the Department of Homeland Security. Sadly, this isn't going
to amount to anything. Yes, it's good to have a higher-level official in
charge of cybersecurity. But responsibility without authority doesn't
work. A bigger bully pulpit isn't going to help without a coherent plan
behind it, and we have none. The absolute best thing the DHS could do for
cybersecurity would be to coordinate the U.S. government's enormous
purchasing power and demand more secure hardware and software.

Nice essay on identity theft:

Company continues bad information security practices:
My commentary:

The Onion takes on identity theft:

** *** ***** ******* *********** *************

         Biometric Passports in the U.K.

The UK government tried, and failed, to get a national ID. Now they're
adding biometrics to their passports. According to the report: "Financing
for the Passport Office is planned to rise from £182 million a year to £415
million a year by 2008 to cope with the introduction of biometric
information such as fingerprints. A Home Office spokesman said the aim was
to cut out the 1,500 fraudulent applications found through the postal
system last year alone."

Okay, let's do the math. Eliminating 1,500 instances of fraud will cost
£233 million a year. That comes to £155,000 per instance of fraud.
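As a quick sanity check, the arithmetic works out like this (figures from the report quoted above, in pounds):

```python
# Back-of-the-envelope check of the UK passport-biometrics trade-off.
annual_increase = 415_000_000 - 182_000_000  # rise in Passport Office financing
frauds_prevented = 1_500                     # postal frauds found last year
cost_per_fraud = annual_increase / frauds_prevented
print(cost_per_fraud)  # roughly 155,000 per instance of fraud
```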

Does this kind of security trade-off make sense to anyone? Is there
absolutely nothing better the UK government can do to ensure security and
safety with £233 million a year?

Yes, adding additional biometrics to passports -- there's already a picture
-- will make them more secure. But I don't think that the additional
security is worth the money and the additional risks. It's a bad security
trade-off.

<http://tinyurl.com/bfsms>
<http://tinyurl.com/akccz>

** *** ***** ******* *********** *************

           Lighters Banned on Airplanes

Lighters are now banned on U.S. commercial flights, but not matches.

The senators who proposed the bill point to Richard Reid, who
unsuccessfully tried to light explosives on an airplane with matches. They
were worried that a lighter might have worked.

That, of course, is silly. The reason Reid failed is because he tried to
light the explosives in his seat, so he could watch the faces of those
around him. If he'd gone into the lavatory and lit them in private, he
would have been successful.

Hence, the ban is silly.

But there's a serious problem here. Airport security screeners are much
better at detecting explosives when the detonation mechanism is
attached. Explosives without any detonation mechanism -- like Richard
Reid's -- are much harder to detect. As are explosives carried by one
person and a detonation device carried by another. I've heard that this
was the technique the Chechen women used to blow up a Russian airplane.


** *** ***** ******* *********** *************

                Counterpane News

Counterpane is offering managed DDOS protection, in alliance with Prolexic:

Schneier and Doug Howard, also of Counterpane, are speaking at the Gartner
IT Security Summit in Washington DC on June 6th:

Schneier is speaking at the Seoul Digital Forum on May 20th:

Schneier is speaking at AusCERT, somewhere near Brisbane, on May 23rd:

Schneier is speaking at Corporate Security 2005 in Helsinki on May 26th:

Schneier is speaking at the EPIC conference titled "National ID at the
Crossroads" in Washington, DC, on June 6th:

This is an interview with me from SecurityFocus:

I was recently interviewed on ITConversations:

And last month, my encryption algorithm Blowfish was mentioned on the Fox
show "24." An alleged computer expert from the fictional anti-terror
agency CTU was trying to retrieve some files from a terrorist's
laptop. This is the exchange:

        "They used Blowfish algorithm."

        "How can you tell?"

        "By the tab on the file headers."

        "Can you decrypt it?"

        "CTU has a proprietary algorithm. It shouldn't take that long. We'll
start by trying to hack the password. Let's start with the basics. Write
down nicknames, birthdays, pets -- anything you think he might have used."

** *** ***** ******* *********** *************

                 Wi-Fi Minefields

The U.S. is laying a minefield in Iraq that can be controlled by a soldier
with a wi-fi-enabled laptop.

Put aside arguments about the ethics and efficacy of landmines. Assume they
exist and are being used. Given that, the question is whether
radio-controlled landmines are better or worse than regular landmines. This
comment, for example, seems to get it wrong: "'We're concerned the United
States is going to field something that has the capability of taking the
man out of the loop when engaging the target,' said senior researcher Mark
Hiznay of Human Rights Watch. 'Or that we're putting a 19-year-old soldier
in the position of pushing a button when a blip shows up on a computer
screen.'"

With conventional landmines, the man is out of the loop as soon as he lays
the mine. Even a 19-year-old seeing a blip on a computer screen is better
than a completely automatic system.

Were I the U.S. military, I would be more worried about whether the mines
could accidentally be triggered by radio interference, and about the enemy
jamming the radio control mechanism.


** *** ***** ******* *********** *************

        The PITAC Report on CyberSecurity

I finally got around to reading the President's Information Technology
Advisory Committee (PITAC) report entitled "Cyber Security: A Crisis of
Prioritization" (dated February 2005). The report looks at the current
state of federal involvement in cybersecurity research, and makes
recommendations for the future. It's a good report, and one which the
administration would do well to listen to.

The report's recommendations are based on two observations: 1) cybersecurity
research is primarily focused on current threats, not long-term threats, and
2) there simply aren't enough cybersecurity researchers, and no good
mechanism for producing them. The federal government isn't doing enough to
foster cybersecurity
research, and the effects of this shortfall will be felt more in the long
term than the short term.

To remedy this problem, the report makes four specific recommendations (in
much more detail than I summarize here). One, the government needs to
increase funding for basic cybersecurity research. Two, the government
needs to increase the number of researchers working in
cybersecurity. Three, the government needs to better foster the transfer of
technology from research to product development. And four, the government
needs to improve its own cybersecurity coordination and oversight. Four
good recommendations.

More specifically, the report lists ten technologies that need more
research. They are (not in any priority order):

        Authentication Technologies
        Secure Fundamental Protocols
        Secure Software Engineering and Software Assurance
        Holistic System Security
        Monitoring and Detection
        Mitigation and Recovery Methodologies
        Cyber Forensics
        Modeling and Testbeds for New Technologies
        Metrics, Benchmarks, and Best Practices
        Non-Technology Issues that Can Compromise Cyber Security

It's a good list, and I am especially pleased to see the tenth item -- one
that is usually forgotten. I would add something on the order of "Dynamic
Cyber Security Systems" -- I think we need serious basic research in how
systems should react to new threats and how to update the security of
already fielded systems -- but that's all I would change.

The report itself is a bit repetitive, but it's definitely worth skimming.

<http://tinyurl.com/79vj6>

** *** ***** ******* *********** *************

         State-Sponsored Identity Theft

In an Ohio sting operation at a strip bar, a 22-year-old student intern
with the United States Marshals Service was given a fake identity so she
could work undercover at the club. But instead of giving her a fabricated
identity, the police gave her the identity of another woman living in
another Ohio city. And they didn't tell the other woman.

Oddly enough, this is legal. According to Ohio's identity theft law, the
police are allowed to do it. Identity theft cannot be prosecuted if: "The
person or entity using the personal identifying information is a law
enforcement agency, authorized fraud personnel, or a representative of or
attorney for a law enforcement agency or authorized fraud personnel and is
using the personal identifying information in a bona fide investigation, an
information security evaluation, a pretext calling evaluation, or a similar
matter."

I have to admit that I'm stunned. I naively assumed that the police would
have a list of Social Security numbers that would never be given to real
people, numbers that could be used for purposes such as this. Or at least
that they would use identities of people from other parts of the country
after asking for permission. (I'm sure people would volunteer to help out
the police.) It never occurred to me that they would steal the identity of
random citizens. What could they be thinking?


The Ohio law:

** *** ***** ******* *********** *************

                 Combating Spam

Spam is back in the news, and it has a new name. This time it's
voice-over-IP spam, and it has the clever name of "spit" (spam over
Internet telephony). Spit has the potential to completely ruin VoIP. No
one is going to install the system if they're going to get dozens of calls
a day from audio spammers. Or, at least, they're only going to accept
phone calls from a white list of previously known callers.

VoIP spam joins the ranks of e-mail spam, Usenet newsgroup spam, instant
message spam, cell phone text message spam, and blog comment spam. And, if
you think broadly enough, these computer-network spam delivery mechanisms
join the ranks of computer telemarketing (phone spam), junk mail (paper
spam), billboards (visual space spam), and cars driving through town with
megaphones (audio spam). It's all basically the same thing -- unsolicited
marketing messages -- and only by understanding the problem at this level
of generality can we discuss solutions.

In general, the goal of advertising is to influence people. Usually it's
to influence people to purchase a product, but it could just as easily be
to influence people to support a particular political candidate or
position. Advertising does this by implanting a marketing message into the
brain of the recipient. The mechanism of implantation is simply a tactic.

Tactics for unsolicited marketing messages rise and fall in popularity
based on their cost and benefit. If the benefit is significant, people are
willing to spend more. If the benefit is small, people will only do it if
it is cheap. A 30-second prime-time television ad costs 1.8 cents per
adult viewer, a full-page color magazine ad about 0.9 cents per reader. A
highway billboard costs 0.21 cents per car. Direct mail is the most
expensive, at over 50 cents per third-class letter mailed. (That's why
targeted mailing lists are so valuable; they increase the per-piece benefit.)

Spam is such a common tactic not because it's particularly effective; the
response rates for spam are very low. It's common because it's
ridiculously cheap. Typically, spammers charge less than a hundredth of a
cent per e-mail. (And that number is just what spamming houses charge
their customers to deliver spam; if you're a clever hacker, you can build
your own spam network for much less money.) If it is worth $10 for you to
successfully influence one person -- to buy your product, vote for your
guy, whatever -- then you only need a 1-in-100,000 success rate. You can
market really marginal products with spam.

So far, so good. But the cost/benefit calculation is missing a component:
the "cost" of annoying people. Everyone who is not influenced by the
marketing message is annoyed to some degree. The advertiser pays a partial
cost for annoying people; they might boycott his product. But most of the
time he does not, and the cost of the advertising is paid by the person:
the beauty of the landscape is ruined by the billboard, dinner is disrupted
by a telemarketer, spam costs money to ship around the Internet and time to
wade through, etc. (Note that I am using "cost" very generally here, and
not just monetarily. Time and happiness are both costs.)

This is why spam is so bad. For each e-mail, the spammer pays a cost and
receives benefit. But there is an additional cost paid by the e-mail
recipient. Because so much spam is unwanted, that additional cost is huge
-- and it's a cost that the spammer never sees. If spammers could be made
to bear the total cost of spam, then its level would be more along the
lines of what society would find acceptable.

This economic analysis is important, because it's the only way to
understand how effective different solutions will be. This is an economic
problem, and the solutions need to change the fundamental economics. (The
analysis is largely the same for VoIP spam, Usenet newsgroup spam, blog
comment spam, and so on.)

The best solutions raise the cost of spam. Spam filters raise the cost by
increasing the amount of spam that someone needs to send before someone
will read it. If 99% of all spam is filtered into trash, then sending spam
becomes 100 times more expensive. This is also the idea behind white lists
-- lists of senders a user is willing to accept e-mail from -- and
blacklists: lists of senders a user is not willing to accept e-mail from.
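The white-list/blacklist idea reduces to a simple decision rule. Here is a
minimal sketch (the addresses are invented, and real systems layer content
filtering on top of this):

```python
# Minimal white-list/blacklist decision rule, as described above.
# Addresses are invented for illustration.
whitelist = {"alice@example.com", "bob@example.com"}
blacklist = {"bulk@spamhouse.example"}

def accept(sender: str) -> bool:
    """Accept mail from known-good senders, reject known-bad ones;
    in this sketch, unknown senders fall through to content filtering."""
    if sender in whitelist:
        return True
    if sender in blacklist:
        return False
    return True  # unknown: pass along to the spam filter

print(accept("alice@example.com"))       # accepted
print(accept("bulk@spamhouse.example"))  # rejected
```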

Filtering doesn't just have to be at the recipient's e-mail. It can be
implemented within the network to clean up spam, or at the sender. Several
ISPs are already filtering outgoing e-mail for spam, and the trend will
continue.

Anti-spam laws raise the cost of spam to an intolerable level; no one wants
to go to jail for spamming. We've already seen some convictions in the
U.S. Unfortunately, this only works when the spammer is within the reach
of the law, and is less effective against criminals who are using spam as a
mechanism to commit fraud.

Other proposed solutions try to impose direct costs on e-mail senders. I
have seen proposals for e-mail "postage," either for every e-mail sent or
for every e-mail above a reasonable threshold. I have seen proposals where
the sender of an e-mail posts a small bond, which the receiver can cash if
the e-mail is spam. There are other proposals that involve "computational
puzzles": time-consuming tasks the sender's computer must perform,
unnoticeable to someone who is sending e-mail normally, but too much for
someone sending e-mail in bulk. These solutions generally involve
re-engineering the Internet, something that is not done lightly, and hence
are in the discussion stages only.
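A "computational puzzle" can be sketched in the style of hashcash: the
sender must find a nonce whose hash has a required prefix, which is cheap
for one message but expensive for millions. This is a generic sketch, not
any specific proposal, and the difficulty parameter is an assumption:

```python
import hashlib
from itertools import count

DIFFICULTY = 4  # leading zero hex digits required (illustrative)

def solve(message: str) -> int:
    """Sender's work: brute-force a nonce so the hash of message+nonce
    starts with DIFFICULTY zero hex digits (~16 bits of work here)."""
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce

def verify(message: str, nonce: int) -> bool:
    """Receiver's check: one hash, essentially free."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

nonce = solve("To: alice@example.com / Subject: hello")
assert verify("To: alice@example.com / Subject: hello", nonce)
```

The asymmetry is the point: verification is one hash, solving is tens of
thousands, and a bulk mailer must pay that cost per message.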

All of these solutions work to a degree, and we end up with an arms
race. Anti-spam products block a certain type of spam. Spammers invent a
tactic that gets around those products. Then the products block that
spam. Then the spammers invent yet another type of spam. And so on.

Blacklisting spammer sites forced the spammers to disguise the origin of
spam e-mail. Recipients trusting only e-mail from people they knew, and
other anti-spam measures, forced spammers to hack into innocent machines and use
them as launching pads. Scanning millions of e-mails looking for identical
bulk spam forced spammers to individualize each spam message. Semantic
spam detection forced spammers to design even more clever spam. And so
on. Each defense is met with yet another attack, and each attack is met
with yet another defense.

Remember that when you think about host identification, or postage, as an
anti-spam measure. Spammers don't care about tactics; they want to send
their e-mail. Techniques like this will simply force spammers to rely more
on hacked innocent machines. As long as the underlying computers are
insecure, we can't prevent spammers from sending.

This is the problem with another potential solution: re-engineering the
Internet to prohibit the forging of e-mail headers. This would make it
easier for spam detection software to detect spamming IP addresses, but
spammers would just use hacked machines instead of their own computers.

Honestly, there's no end in sight for the spam arms race. Currently about
80-90% of e-mail is spam, and that percentage is rising. I am continually
battling with comment spam in my blog. But even with all that, spam is one
of computer security's success stories. The current crop of anti-spam
products work pretty well, if people are willing to do the work to tune
them. I get almost no spam, and very few legitimate e-mails end up in my
spam trap. I wish they would work better -- Crypto-Gram is occasionally
classified as spam by one service or another, for example -- but they're
working pretty well. It'll be a long time before spam stops clogging up
the Internet, but at least there are technologies to ensure that we don't
have to look at it.

** *** ***** ******* *********** *************

              Comments from Readers

From: Keith Martin <keith@keith.gs>
Subject: Mitigating Identity Theft

In Europe (and in Ireland in particular) we have extensive rules for
dealing with what you refer to as "open[ing] a credit card account by
simply filling out a bunch of information on a form". There's a legal
requirement on any bank or credit card company in Ireland to verify the
identity of any applicant for a bank account or a credit card using (at
least) two separate and different methods.

For example, one is usually a photo ID - passport or driver's license are
the most usual (most Irish people would have a passport, but I'm not sure
that option would work in the US), and the other is a proof of address
(e.g., a telephone or other utility bill). It's quite possible to get one
or the other, but it would be difficult to get both. Also, the utility
bill has to be no more than six weeks old, so the possibility for using old
addresses or fake addresses is limited (although not entirely mitigated).

It's not 100% secure, but it's better than some of the systems used in
other countries. The legislation was originally introduced to deter money
laundering, but had a useful secondary purpose, which I know (having asked
them!) the legislators hadn't intended at the start.

From: Charles H Baker <chb@charleshbaker.com>
Subject: Mitigating Identity Theft

One thing I would like to bring to your attention is that once FACTA goes
into effect on June 1st, consumers will become responsible for the
fraudulent charges if they don't notify the financial institution within
60, or in some cases 30, days. This is very problematic because, according
to FTC numbers, most victims don't become aware that they are victims until
more than a year has passed.

In addition, the FTC says that only 26% of ID theft is credit or financial
in nature. The rest is healthcare fraud, tax fraud, etc. How would
validating the transaction help if someone uses my Social Security number
to get a job, and then doesn't pay any taxes? The IRS is going to come
looking for me!

From: Andrew Blank <andrew.blank@wanadoo.nl>
Subject: Mitigating Identity Theft

You make the point in recent Crypto-Grams that transaction authentication,
rather than user authentication, is the key point in financial
transactions. The Dutch agree with you. Here's what we've been doing in
Holland for the last few years with internet banking.

1. Bank customers have ATM cards with a chip that holds their PIN.

2. Internet banking customers have a challenge-response calculator (a
token). The calculator is not unique to the individual, but every
calculator must be unlocked by inserting the ATM card and entering the
PIN. This personalizes the calculator to the user, as long as the ATM card
is inserted. Once unlocked, the calculator will go to sleep after a few
minutes and require the PIN to be re-entered to wake it up.

3. Users log in to internet banking on the web by providing their bank
account number and the serial number (not the PIN, of course) of their ATM
card. The bank provides an 8-digit challenge, which the user enters into
the calculator, and replies with the computed 6-digit response.

4. At this point the user has access to the account; he can prepare -- but
not send -- payments. Typically the user pays accounts that are in his
bank address book. Each new account entry to the address book generates a
challenge-response from the bank (and that probably also means the user has
to re-enter his PIN to wake up his calculator, too). If a user makes a
large payment to an account that isn't in the address book, then there is
also a challenge-response required to validate the receiving account details.

5. Finally, when all payments have been queued, the user selects "Send to
bank". A list of all the queued transactions (payees and amount to be
paid) is displayed and a final challenge-response is required before the
batch of payments is sent.
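The challenge-response exchange in steps 3 and 5 can be sketched as
follows. The use of HMAC-SHA256 and the digit counts are my illustrative
assumptions; the actual Dutch calculators use their own algorithm:

```python
import hashlib
import hmac
import secrets

# Secret shared by the bank and the chip on the ATM card (assumed model).
CARD_KEY = secrets.token_bytes(16)

def respond(challenge: str) -> str:
    """What the unlocked calculator computes from the bank's challenge:
    a 6-digit response derived from the card's secret."""
    mac = hmac.new(CARD_KEY, challenge.encode(), hashlib.sha256)
    return str(int.from_bytes(mac.digest()[:4], "big") % 10**6).zfill(6)

# Bank side: issue an 8-digit challenge, then check the response
# against its own computation for that card.
challenge = str(secrets.randbelow(10**8)).zfill(8)
response = respond(challenge)  # user types the challenge into the token
assert len(response) == 6
assert hmac.compare_digest(response, respond(challenge))
```

Because each challenge is fresh, a captured response is useless for
authorizing a different payment batch.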

This system isn't perfect, but it seems pretty good. It makes it very
difficult for a man-in-the-middle to invent a payment and somehow convince
the user to authenticate a transaction that he didn't originate. There is
obviously some extra work for the user, but in return
he gets pretty good assurance that he, and not some stranger, is in charge
of his money.

From: Andy Clark <andy.clark@dial.pipex.co.uk>
Subject: Mitigating Identity Theft

With regards to the liability for credit card fraud, this is changing in
the UK with the introduction of the chip and PIN system. When we make
transactions, the card has to be inserted into a device and a PIN entered
for the transaction. The good point of this is that the card does not
leave the person who is making the transaction; the charging devices are
commonly brought to the table of the restaurant, for example.

In addition to this, most of the card companies are changing their terms
and conditions to shift the liability onto the card holder and also to give
them the responsibility of keeping their card and PIN safe. Historically,
if someone had a card stolen and reported it an hour later, they were only
liable for the first $50; now they are liable for all transactions made in
that hour.

For example, see some UK card terms and conditions:


From: laszlo@hars.us
Subject: Mitigating Identity Theft

In the Eighties and early Nineties I lived in Germany. The bank system
there was far more advanced than what we have in the US now. All of my
utility, subscription, and insurance bills were automatically deducted from
my account (after my one-time written authorization) and I had six weeks to
cancel any deduction for any reason. I did not have to write checks; any
criminal activity would have been far more suspicious. The use of
chipcards as cashcards and my VISA card showing my photograph on the front
side were also little additions to the security.

Almost 20 years ago the security was higher there than it is now in the US.
My bank handed me a list of transaction authentication numbers (TANs), each
to be used only once. For online banking I had to authenticate myself with
the usual username/pass-phrase combination and also had to provide the next
transaction number from the printed list. No malicious software could get
into the drawer of my desk to get the list. Even secretly making a
photocopy of the list was of limited use, because I would notice it when
typing in my next transaction number. Spoofing the TAN online, or a MitM
attack, could allow a malicious person to change one single transaction,
but it would be immediately apparent to the legitimate user: his own
transaction fails or does not produce the expected account balance. A
telephone call would prevent any damage.
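The TAN scheme described above is essentially a printed one-time-password
list. A minimal sketch (the numbers are invented, and the requirement to
use TANs strictly in order is simplified to "each valid once"):

```python
# One-time transaction authentication numbers (TANs): the bank issues
# a printed list, each number valid for exactly one transaction.
tan_list = {"493027", "118245", "770913"}  # the list in the desk drawer
used = set()

def authorize(tan: str) -> bool:
    """Accept a transaction only with an unused TAN from the list."""
    if tan in tan_list and tan not in used:
        used.add(tan)
        return True
    return False

assert authorize("493027")      # first use succeeds
assert not authorize("493027")  # replaying a photocopied TAN fails
```

This is why a photocopy of the list was of limited use: any replay is
rejected, and the legitimate owner notices the gap on the next use.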

The problem in the US is that there is so much competition for new
customers that even their tiniest inconvenience, like typing in a
transaction number and marking it used, may result in losing a few
customers. It is simple math: if a very easy-to-use banking system
attracts more customers, and the resulting extra profit is more than what
the bank is expected to lose on fraud, then the insecure, simple system
will be used -- especially if banks can push the loss onto the affected
customers or merchants. As you say, the solution is making the financial
institutions liable for fraudulent transactions.

From: John <atfdjsj02@sneakemail.com>
Subject: Mitigating Identity Theft

I am one of the senior technical architects on the point of sale team of a
national retail chain, and I can assure you that I am intimately familiar
with the credit authorization and settlement processes.

The way credit works is if we get a positive credit authorization from Visa
(that "auth code" you see printed on your receipts is evidence of that),
then Visa has assumed liability for the transaction, and we do get paid
through a process called settlement.

There are many network links to take an authorization request from POS to
the issuing Visa bank (and back again.) Nothing is perfect, and credit
authorization systems sometimes go offline. In that case we have our call
center process authorization requests over the phone (normally they handle
account questions, billing and/or dunning, or authorization calls for cards
that may require additional processing.) But this phone processing is very
expensive in terms of cashier time and customer frustration, not to mention
the additional load placed on the call center staff, so we have what is
called a "floor limit" -- any offline charge below this limit is
automatically approved by our corporation. That means we have assumed
liability for that charge.

Typically the floor limit is irrelevant -- we're online to credit far more
than 99% of the time. But when we do go offline, the amount of that limit
serves to act as a throttle to the call center. If we have the limit set
at $1.00, we might get a thousand phone calls a minute. And if we set it
to $10,000.00, we might get one call an hour. So we vary that limit based
on the risk we're willing to assume vs. the capacity of our call center to
process calls in an offline situation.
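The floor-limit rule described above is a one-line decision; here is a
sketch, with the limit and amounts invented for illustration:

```python
# Sketch of the offline floor-limit rule: when the credit authorization
# network is down, auto-approve small charges (the retailer assumes
# liability) and route large ones to the call center for voice auth.
FLOOR_LIMIT = 250.00  # invented value; tuned against call-center capacity

def offline_decision(amount: float) -> str:
    if amount < FLOOR_LIMIT:
        return "approve (store assumes liability)"
    return "route to call center for voice authorization"

print(offline_decision(7.99))
print(offline_decision(9500.00))
```

Raising or lowering FLOOR_LIMIT is exactly the throttle the letter
describes: it trades assumed fraud risk against call-center load.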

If we assume liability for a transaction in the case where we were offline
to Visa, and there is a subsequent problem with the transaction (the
customer complains of fraudulent usage, or otherwise refuses to pay) then
Visa issues a "chargeback" to us, and we eat that loss. No auth code, no
payment. Needless to say, it's considered very important that we keep the
systems online to avoid this risk.

It is also important to keep the current value of the floor limit secret
because news of system failures spreads rapidly amongst criminals; people
with forged or fraudulent cards or cards for closed or delinquent accounts
descend upon our stores in droves if they think we're offline. Knowledge
that they can safely spend $7.99 with impunity vs. getting declined for an
$8.00 charge leads to a lot of little fraudulent transactions. The problem
isn't as dramatic in an intermittent or transient failure mode, but in a
disaster scenario (such as after the Florida hurricanes) we get taken
advantage of quickly. After restoring power and some telephone service (a
working cell phone is considered adequate), enough network bandwidth for
online credit authorizations tops the priority list for restoration.

Also, we're not the only link in the authorization chain. For example, we
do not have direct lines to every Visa member bank. We use a third-party
consolidator service to act as our gateway into the Visa network. And they
also employ floor limits to control the volume in their systems as
well. If they stand in for Visa authorization, then we can reassign our
chargebacks to them, since they are the ones liable to us for any
fraudulent charges they approve.

The same system even scales to Mom & Pop's store, where they have a
Verifone credit authorizing terminal. If they get an auth code from their
Visa authorizing service, then they get paid. If they take your card on an
old-fashioned imprinter and do not make a phone call, they won't get paid
for a fraudulent charge. But if they call and write the auth code on the
carbonless slip and make an imprint of the card to verify its presence,
they do get paid. It's in their contract.

For the most part, the credit companies eat the losses. That's one of the
reasons why they charge exorbitant interest rates that are far over prime
-- to cover their risk.

From: Anton Holzherr <anton@holzherr.ch>
Subject: Mitigating Identity Theft

Transaction authentication (or lack of it) is not just an e-commerce
problem. In Switzerland there have been newspaper reports of abuses of the
banks payment systems where payments made by bank customers via snail-mail
have been redirected illicitly to third-party accounts. See for example:


In Switzerland, bills are not paid, as in the U.S., by sending a check to
cover a creditor's claim for payment. It works the
other way around. Each creditor sends you, together with his bill, a
deposit slip which contains his bank account details and a reference
number. Using this information, the bank customer issues a payment order
to the bank by going to the bank counter, using a secure transaction over
the internet, or by sending a payment order via snail mail.

As a rule, the snail mail payment system uses authentication only for the
total sum of all the transactions contained in one payment batch. The way
it works, at the end of the month, Joe Bloggs collects all his creditors'
payment slips, adds up the total of all transaction requests, fills in a
lump sum payment order for the bank containing this total, signs by hand
and sends this payment order together with all the payment slips to the
bank in a sealed envelope.

What has been happening is that thieves steal these (paper) payment orders
in the middle of the night. Using duplicated keys, slings or sticky tape,
they fish the posted letters out of the outgoing post boxes. Then they
substitute their own deposit slips, making sure the total matches, and thus
divert the money to their own accounts. The bank customer only finds out
that he has been taken for a ride when he receives his bank statement at
the end of the month and discovers that some other person, not his
creditors, has obtained the money.

This scam works because the banks only require a valid legal signature
authenticating the total amount, not one for each transaction processed.
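The flaw is easy to make concrete: the bank's check only covers the batch
total, so any slip substitution that preserves the sum passes. A sketch,
with invented accounts and amounts:

```python
# Why total-only authentication fails: the handwritten signature covers
# only the batch total, so slips can be swapped if the sum is preserved.
signed_total = 100.00  # the amount Joe Bloggs's signature authenticates

original_slips = [("landlord-acct", 60.00), ("utility-acct", 40.00)]
tampered_slips = [("thief-acct-1", 60.00), ("thief-acct-2", 40.00)]

def bank_accepts(slips, total):
    """The bank's only check: does the batch sum match the signed total?"""
    return abs(sum(amount for _, amount in slips) - total) < 0.005

assert bank_accepts(original_slips, signed_total)
assert bank_accepts(tampered_slips, signed_total)  # fraud goes unnoticed
```

Authenticating each (account, amount) pair individually, rather than the
lump sum, is what would close the hole.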

What the newspaper article does not mention is how the thieves, who divert
the money into their own accounts, manage to stay anonymous.

From: Joseph K Huffman <Joseph.Huffman@pnl.gov>
Subject: Lighters Banned on Airplanes

One of my hobbies is explosives. I have a ATFE license to manufacture high
explosives. I do so recreationally on a fairly regular basis.

I made the explosives for a recent event wearing gloves. Then I had to
rework some things later and did that without gloves. A few minutes later
I handled a rifle case without cleaning up. On April 13th, three days
later, that same rifle case went through airport security at Pasco,
Washington. I watched a TSA agent wipe down the handle and interior of the
case and test them for explosives. Everything passed. The rifle case went
with me to Albuquerque, New Mexico. On April 16th, that same rifle case
made the return trip and again went through a TSA screening without
questions. I have numerous stories of this nature. This is only the most
recent.

As near as I can determine, airport "security", from one end to the other,
only exists to make people feel better. It does not represent a deterrent
to even a moderately skilled adversary. We are wasting something like $1.8
billion per year on this activity to make some people feel better.

From: "Mike Glendinning" <mikeg@dulciana.com>
Subject: Two-Channel Authentication with Cell Phones and SMS

In the March Crypto-Gram, you write about the use by a bank of a
"two-channel" authentication mechanism involving cell phones and SMS. The
technique is given further endorsement in the April issue by the follow-up
from Jonathan Tuliani.

I must however raise a word of caution. As a consultant to the telecoms
industry, I have designed several systems using this technique in the past,
but believe it is rapidly becoming much less useful. The technique makes
the assumption that the cellular network is closed, well-controlled, and in
particular envelops both the originator of the message (e.g., the bank) and
the user's cell phone. But three technological trends in the telecoms
industry mean this assumption no longer holds true:

1) The cellular industry is moving away from the use of proprietary
network-level protocols for the delivery of services such as SMS. For
example, the newer Multimedia Messaging Service (MMS) is based on open
Internet protocols such as HTTP. The knowledge needed for the creation and
spoofing of messages is therefore becoming much more widespread.

2) The closed and secure networks offered by the telcos are being opened up
and interconnected with the public Internet to offer the "wireless web"
experience as well as third-party messaging services. Therefore, these
networks no longer represent a completely separate and independent channel
to the Internet. The origination of messages is becoming easier as it no
longer requires a specialised and dedicated network connection to the telco.

3) Older "dumb" handsets where the software is completely controlled by the
manufacturer and network operator are being replaced with "smart" devices
that are fully programmable by the end user. There are now many
possibilities for trojan and man-in-the-middle attacks from rogue
applications running on the cell phone itself. For example, with
smartphones using the Symbian operating system (and to a lesser extent
Java/J2ME) it is possible for applications to intercept all incoming SMS
messages as well as have full control over the user interface.

As you can see, it is simultaneously becoming easier to inject false
messages into the "two-channel" authentication mechanism as well as to
intercept valid ones. Unfortunately, I find that these issues are not very
well understood by many in the telecoms industry, nor by those who rely on
this technology for the purposes of user authentication.

The lesson is, I suppose, that it's important to understand clearly all the
assumptions on which any security mechanism is based. And that these
assumptions must be continuously re-evaluated in the light of a changing
world.

** *** ***** ******* *********** *************

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses,
insights, and commentaries on security: computer and otherwise. You can
subscribe, unsubscribe, or change your address on the Web at
<http://www.schneier.com/crypto-gram.html>. Back issues are also available
at that URL.

Comments on CRYPTO-GRAM should be sent to
schneier@counterpane.com. Permission to print comments is assumed unless
otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will
find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as
it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the
best sellers "Beyond Fear," "Secrets and Lies," and "Applied
Cryptography," and an inventor of the Blowfish and Twofish algorithms. He
is founder and CTO of Counterpane Internet Security Inc., and is a member
of the Advisory Board of the Electronic Privacy Information Center
(EPIC). He is a frequent writer and lecturer on security topics. See

Counterpane is the world's leading protector of networked information - the
inventor of outsourced security monitoring and the foremost authority on
effective mitigation of emerging IT threats. Counterpane protects networks
for Fortune 1000 companies and governments world-wide. See

Crypto-Gram is a personal newsletter. Opinions expressed are not
necessarily those of Counterpane Internet Security, Inc.

Copyright (c) 2005 by Bruce Schneier.


Joey Kelly
< Minister of the Gospel | Linux Consultant >
"I may have invented it, but Bill made it famous."
 --- David Bradley, the IBM employee that invented CTRL-ALT-DEL

Nolug mailing list

Received on 05/15/05
