Re: [vox-tech] Verify Ubuntu files
Quoting Bill Broadley (firstname.lastname@example.org):
> So basically, unless you are hyper-vigilant, often verify that your
> machine runs only trusted kernel/boot media, and audit every one of the
> innumerable updates to your system, once you are hacked it's too late.
Certainly, "too late" in the sense that you cannot recover without
shutting down (by cutting power) and rebuilding the compromised system
from datafiles, audited/reconstructed conffiles, and reinstallation
media only, carefully avoiding any trust in extant code and conffiles.
Also, "too late" in the sense that all tools on the suspect runtime
system are themselves necessarily suspect. But they may still give you
meaningful data of the "something's wrong" sort, even though the
"nothing's wrong" return values are doubtful -- and even the latter
might work, attackers often being clumsy and non-systematic.
In fact, the interesting question is exactly that: Whether and how the
compromise of a system _can_ be detected from that same runtime system.
Very often, it can be, by any admin who's paying attention, for a number of
reasons I hope to detail below (as I remember them ;-> ). The main
point will be: Your best, most effective detection is always an admin
who's paying attention.
> Well a single successful tweak can compromise a system in such a way
> that nothing else on the filesystem changes.
That is difficult, given competent monitoring and logging. (Note that
-- and I certainly don't say I necessarily always do this -- a paranoid
admin will send logging and monitoring data to a separate and maximally
protected loghost that serves no other role.) In my experience, both
local and remote attacks have tended over the years to require
filesystem activity that is then potentially detectable. Depending....
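The separate-loghost idea can be as little as one forwarding rule on each
monitored host; a hedged rsyslog sketch (the loghost name is hypothetical,
and the loghost itself should accept logs and serve no other role):

```
# /etc/rsyslog.d/remote.conf -- forward everything to a dedicated loghost.
# "@@" means TCP; a single "@" would mean UDP. Hostname is illustrative.
*.* @@loghost.example.com:514
```

An attacker who edits or truncates local logs then still has to explain
the copies that already left the machine.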
(It would be easier to comment intelligently if you would be specific
when you say things like "a successful tweak". I mean, you could mean
something like: "In 2003, I used stolen ssh keys to login as a legit
user, used the recently discovered dangerous bug in the brk() system
call to escalate to root, scp'd over a new uhci.ko, rmmod'd the real
uhci module, insmod'ed my trojaned one, and then rm'ed the evidence.
Ever since then, the fake uhci.ko in the running kernel instance has
done my dirty work without any detectable signs in userspace.")
> So for instance if /sbin/shutdown hides the tracks
It would take a pretty dim sysadmin to run /sbin/[anything] on a suspect
system.
If, hypothetically, you suspect a system of being compromised, you frob
the Big Red Switch, boot from trusted maintenance media, and do your
checks from there. Note: In I-left-nothing-to-find-on-the-filesystem
scenarios like mine above, the admin might indeed find no smoking gun
in the sense that the kernel runtime state evaporated at Big Red Switch
time -- but it is also true, FWIW, that the system will not reload the
trojaned module at the next boot.
(Your phrase "compromising only things in memory" inherently means a
system compromise that, among other problems, cannot persist past
reboot. Moreover, it severely limits the scope of what the attacker can
accomplish, and generally attackers aren't satisfied with that -- a
point I'll return to, below.)
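The boot-from-trusted-media drill boils down to comparing the filesystem
against a baseline recorded somewhere the attacker can't touch. A minimal
self-contained sketch, with throwaway temp files standing in for real
binaries and read-only media:

```shell
#!/bin/sh
# Sketch: detect a changed file by checking it against a checksum baseline.
# In real life the baseline lives on RO media and the check runs from
# trusted maintenance media; here a temp dir simulates both.
set -e
workdir=$(mktemp -d)
echo 'original shutdown binary' > "$workdir/shutdown"
( cd "$workdir" && sha256sum shutdown > baseline.sha256 )  # recorded at install time
echo 'trojaned shutdown binary' > "$workdir/shutdown"      # simulate the compromise
if ( cd "$workdir" && sha256sum -c --quiet baseline.sha256 >/dev/null 2>&1 ); then
    verdict=clean
else
    verdict=mismatch
fi
echo "verdict: $verdict"
rm -r "$workdir"
```

The point of the exercise: sha256sum, the baseline, and the kernel doing
the reads must all come from media the suspect system never wrote to.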
In any event, my experience is that alert sysadmins are actually far,
_far_ more likely to discover system compromise through observing
suspicious, changed system behaviour than through any other means: e.g.,
noticing an odd pattern of kernel "oopses", especially if it pops up on
multiple machines for no obvious reason. Or a sudden, unexplained spike
in bandwidth usage, or a sudden shortage of filesystem space even though
you can't seem to find the files responsible. Or, service ports that
are probe-able using nmap from nearby hosts even though netstat, sar,
etc. on the suspect local system claim nothing's there.
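That nmap-versus-netstat cross-check can be mechanized with comm(1). The
two port lists are simulated below; in real use the first comes from
"ss -tln" (or netstat) on the suspect host and the second from an nmap
scan run on a trusted neighbouring machine:

```shell
#!/bin/sh
# Compare what the suspect host admits is listening against what a trusted
# nearby host can actually probe. comm(1) wants lexically sorted input.
set -e
tmp=$(mktemp -d)
printf '22\n80\n' | sort > "$tmp/local-ports"          # suspect host's own claim
printf '22\n80\n31337\n' | sort > "$tmp/remote-ports"  # what nmap sees from outside
# Ports visible remotely but absent locally suggest a hidden listener:
hidden=$(comm -13 "$tmp/local-ports" "$tmp/remote-ports")
echo "hidden ports: $hidden"
rm -r "$tmp"
```

A non-empty result is exactly the "netstat claims nothing's there" red
flag: userspace tools on the box are lying, but the network stack isn't.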
By and large, the reason people break into *ix hosts isn't just to play
tourist in kernelspace using direct manipulation of memory, trojaned
modules, etc., but rather to _do_ something with them, e.g. run botnets,
serve up files/spam/etc., bombard Saakashvili's personal Web site with
RST packets, and so on. When a host starts violating system-local
policy -- doing something it's not supposed to do, or not doing
something it _should_ do -- to any significant degree, the sysadmin
really ought to notice.
(Before I hear the ritual objection: If you're running an *ix system,
you're a sysadmin. If you don't want to need to be a very good
sysadmin, don't do risky things and/or seek help.)
> True, but tripwire seems best at detecting changed binaries, there are
> many places on a filesystem where you can add a text file that creates
> a security hole.
Um, if any of those "places on a filesystem" aren't also monitored by
the IDS, then the admin didn't set up the IDS competently. (Again, I'll
just mention lest the point be lost that I'm not a big Tripwire fan.)
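Any file-integrity checker can watch those text-file locations as easily
as it watches /bin. An illustrative AIDE policy fragment (the rule name
and attribute set are my own choices, not a recommended standard):

```
# aide.conf fragment: monitor config/text locations, not just binaries.
Strict = p+i+n+u+g+s+m+c+sha256
/etc            Strict
/root           Strict
/var/spool/cron Strict
# Exclude files that churn legitimately, rather than drown in noise:
!/etc/mtab
```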
> Sure it's possible to detect the change, but it's pretty easy to just
> make a hook for the next time you run the tripwire client, or apt-get
I don't want to seem like I'm complaining, but that sounds like quite a
vague handwave. Competent use of Tripwire means you don't rely on a
suspect system while updating the policy file. I mean, if you do, then
what's the point? (I'm pretty sure you _are_ assuming trusting the
local system while updating the policy file. So Don't Do That, Then.)
> Hiding things better on shutdown is common....
Not trusting shutdown is covered in IDS 101. ;->
> > (FWIW, I don't like Tripwire: Too slow, far too much hassle to admin,
> > too crufty; but I'm glad to give credit for what they did thoughtfully.)
> The common method, a signed local database is no more secure than the local
> RPM database....
This is specifically warned against in the Tripwire documentation. As I
said in my 2003 article: "Tripwire, Inc. recommends that, at a minimum,
you verify Tripwire files using the siggen utility provided for that
purpose, and preferably store them on read-only media." In fact, I'm
pretty sure they recommend not trusting the runtime system at all.
> With common security policies requiring that security patches be
> applied within days to a week (as they often do), it's rather rare to
> do the most secure cycle: shutdown, mount RO media, check the database,
> patch, mount the media RW, update the database, reboot.
You're quite right. This is one of the reasons why I strongly agree
with Marcus Ranum about patching: If you're having to spend significant
amounts of your time patching, you're running the wrong software or in
the wrong way.
vox-tech mailing list