The Register picked the story up here:
>> They shone a light on it. Physical access of a sort, true...
>Well, I'm not sure what you mean by, "of a sort." You can't shine a
>light on something if you don't have physical access to it. In terms of
>smart cards and phones and suchlike, if you can shine a light on it, you
>could also take it apart, or do anything else you wanted to do to it.
NOPE! This just ain't true. If your keitai is in your sweaty paw and my
torch is in my sweaty paw, I can shine a light on your keitai without
having physical access. To be less literal, if I can make bits flip by
aiming a focussed beam of - whatever - at your keitai on your desk in
front of you, then I have only "physical access of a sort". The same sort
of physical access Plod-kun [the police] has to a car's engine management
system when they shut it down remotely.
>So if just waiting for the sun or a cosmic ray shower would work with
>people's keitais, people's keitais would be malfunctioning when on sunny
I know there was some concern a while ago that keitais could interfere
with the operation of CPUs. I don't know how much there was in it (and
presumably the CPUs in keitai are not affected by the normal operation of
keitai). But as dies get smaller on chips and speeds increase, ambient
radiation effects may well start to appear. How would we know if they did?
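One way we might know is a soak test: fill memory with a known pattern, leave it, and re-read it counting bits that changed. A toy sketch of that idea follows - note that a real memory tester (memtest86 and the like) runs below the OS and walks physical memory, whereas this only watches one process's buffer, so it is an illustration of the principle, not a usable detector.

```python
# Toy sketch of a memory soak test: fill a buffer with a known
# pattern, wait, then re-read it looking for flipped bits.

def count_flipped_bits(buf: bytearray, pattern: int) -> int:
    """Count bits in buf that differ from the expected fill pattern."""
    flips = 0
    for byte in buf:
        diff = byte ^ pattern          # XOR leaves a 1 wherever a bit changed
        flips += bin(diff).count("1")  # population count of the difference
    return flips

buf = bytearray([0xAA] * (1 << 20))    # 1 MiB of 10101010...
# ... leave the buffer sitting in RAM for a long time ...
print(count_flipped_bits(buf, 0xAA))   # 0 unless a bit flipped
```

ECC memory does essentially this continuously in hardware, which is one reason servers see (and log) such flips while consumer devices mostly cannot.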
>So I don't think we have to worry about malicious Java programs
>violating the JVM security model on cards and phones to which the
>attacker does not have physical access.
Well - 1) there are a lot of keitais, so even unlikely events are
guaranteed to happen sometimes - possibly quite often. 2) The physical
access required may be only very attenuated - as in the police example.
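Point 1) is just an expected-value argument, and a back-of-envelope calculation shows its shape. Both numbers below are made up purely for illustration - they are not measurements of real handsets or real flip rates.

```python
# Back-of-envelope: with enough devices, rare per-device events
# become routine in aggregate.  Both figures are hypothetical.

devices = 100_000_000        # assumed handset population
p_flip_per_day = 1e-9        # assumed per-device daily odds of a flip

expected_per_day = devices * p_flip_per_day
print(expected_per_day)      # ~0.1: roughly one flip somewhere every ten days
```

Even with odds of one in a billion per device per day, a hundred million devices see an event every week or two - "unlikely" stops meaning "never" at population scale.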
>> >And this is certainly not a bug in the java sandbox;
>> Nowhere did they claim it was.
>The first sentence of the article is:
> A Princeton University student has shed light on security flaws in
> Java and .Net virtual machines....
>There's no question in my mind that this is patently false. This is
>not a security flaw in either of the virtual machines; it's a security
>flaw in the hardware.
Hardware does not have security flaws. Nor does software. Systems have
security flaws, and this is a flaw in the system, of which the software is
a part. It isn't so much false as lazy.
>Saying it's a flaw in the VM is like saying that
>someone's front door was insecure when the burglar came in through an
>unlocked back window.
The security flaw is in trusting the instantiated VM.
>Indeed the system was compromised, but this was not due to any error or
>problem with the VM.
As soon as you flipped the bit, you changed the VM. So the VM that you say
does not have the problem is a different VM from the VM that we are talking
about - the flipped-bit VM.
The flipped-bit VM (which is the one under discussion) CERTAINLY has a
problem. The VM you are talking about is some kind of ideal abstract
entity, quite separate from what is actually running on this or that
machine.
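To see why one flipped bit is enough to make it "a different VM", consider a toy sketch (not the actual attack from the paper, which works on object references): a value that has already passed a safety check stops satisfying that check the instant a single bit of it changes in memory.

```python
# A single flipped bit can turn an in-bounds value into an
# out-of-bounds one.  Toy illustration only.

table = list(range(8))       # valid indices: 0..7
idx = 3
assert 0 <= idx < len(table) # the bounds check passes...

flipped = idx ^ (1 << 3)     # ...then bit 3 flips (beam, cosmic ray, heat)
print(flipped)               # 11 - now points outside the table
print(flipped < len(table))  # False: the checked property no longer holds
```

The verifier's guarantees hold for the bit pattern it checked; once the pattern in RAM differs from the pattern that was checked, those guarantees say nothing about the machine you are actually running.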
>I did read it fully. I think the article, taken as a whole, gives an
>impression that the security risks are greater than they really are. But
>later I'll download and read the paper, and see how it compares to the
I confess I have not read the paper yet....
You make much play of the physical access thing - but I think what is
important is how trivial the attack is once physical access is gained. If
an attack can be performed in a few seconds in a way that is transparent
to the user, that is far more serious than an attack that takes weeks and
requires an electron microscope.
Received on Fri May 16 17:40:34 2003