Declan Ingram, Securus Global Practice Manager, talks about IDS/IPS security at Kiwicon 2007. Broadcast here via Patrick Gray’s excellent weekly IT security broadcast, Risky Business.

Synopsis: “When you consider the system as a whole, there are plenty of ways to bust an IDS / IPS. From the wire to the incident response team we will work through various limitations and examples of potential mischief.”

  1. Big Galoot says:

    Dec – your comments on being a traffic log analyst were quite amusing and came from coalface experience.

    But this is what gets me.

    Ya can’t reasonably expect any human to sit around for 8 hours a day, 5 days a week, reading log files and remaining focussed on the job at hand. It’s simply unnatural and soul-destroying.

    Stating the obvious, the task of interpreting data in a meaningful way requires intelligence and an analytical brain. A higher-thinking, join-the-dots person.

    But the act of reading volumes of text/data on a screen itself can be repetitive, tiring and boring in the extreme. Not the kind of occupation ideally suited to your higher-thinking analytical types.

    The skill set required? A person with high analytical skills who is accepting, willing and able to perform boring, repetitive tasks – often whilst rostered on the graveyard shift. And remain on the ball all the time.

    A rainman automaton (accepting of boredom & repetition) might have the skills to see anomalies or patterns in the voluminous traffic/logs, but may not have the higher-thinking skills to see a bigger picture or be able to separate the wheat from the chaff.

    What is the answer? Buggered if I know! Just throwing it out there..

    But I seriously doubt that *any* human is perfectly suited for the role of security log file analyst. We haven’t evolved as a species for reading endless log files day in & day out.

    We’re simply not wired for it.


  2. D2 says:

    Just throwing some stuff out there on the off-chance someone’s interested.

    D2’s daily keywords -> Splunk. Security Metrics. Total Information Awareness. Visibility. Surveillance.

    Schneier -> “Without monitoring, you’re vulnerable until your security is perfect. If you monitor first, you’re immediately more secure.”

    “Reclaiming Network-wide Visibility Using Ubiquitous Endsystem Monitors”

    Information valuation?

    I propose IT and IT Security/Risk *is* change management. First you must see. Then you must measure. Then you must manage.

    “Organisations with ‘open networks’ want IPS to police their highways.
    Organisations with ‘closed/segmented networks’ use internal firewalling to restrict passage to flows that are deemed ‘good’, but most of the time they’re swiss cheese!

    Organisations are starting to see the benefit of ‘extrusion detection’, with non-production routed darknets.

    We need to permit only the good stuff and then enumerate the bad stuff inside the good stuff. How do you define the good stuff when sometimes organisations don’t even know themselves, don’t want to know, or don’t care what’s on their network? Asset and flow classification is a big, never-ending job! It’s very hard to spot bad stuff inside good stuff, and very resource intensive.

    Netflow helps. Baselining helps. Anomaly detection helps. Having management that understands, cares and realises the intangible, unquantifiable(metrics?) helps + experience goes a long way.”
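    The baselining idea above can be sketched minimally: keep a rolling window of a per-source traffic metric (the bytes-per-minute metric and thresholds here are illustrative assumptions, not anything from the talk) and flag observations that fall far outside recent history.

```python
from collections import deque
from statistics import mean, stdev

class FlowBaseline:
    """Rolling baseline over one traffic metric; flags large deviations."""

    def __init__(self, window=60, threshold=3.0):
        self.history = deque(maxlen=window)  # recent observations only
        self.threshold = threshold           # z-score cutoff for "anomalous"

    def observe(self, value):
        """Record a new observation; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 10:  # need some history before judging
            mu = mean(self.history)
            sigma = stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

# Example: bytes-per-minute from one host, ending in a sudden spike
baseline = FlowBaseline(window=30, threshold=3.0)
for bpm in [100, 102, 98, 101, 99, 103, 97, 100, 102, 99, 101, 5000]:
    if baseline.observe(bpm):
        print("anomaly:", bpm)  # prints: anomaly: 5000
```

    Real anomaly detection over netflow is of course far messier (seasonality, multiple dimensions, legitimate bursts); this only illustrates the baseline-then-deviate shape of the approach.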

    EOBP ( End of Brain Dump )

  3. D2 says:

    eeek, test… did big comment go bye-bye?

  4. Sorry D2…sorted now. It was caught in the “intelligent” spam filter.

    D2, people could do worse than follow your links. Knowing you for so long, your links are rarely not worth a read… some are out there, but none are boring.

    Anton Chuvakin is probably one of the industry leaders in log file analysis, and his work is well worth following. His blog is just a start; work your way from there. He is also a widely quoted specialist in this field and in IT security in general. I am honoured that he reads and feeds BorB, so I have a chance here to reciprocate:

    BG, your comments are valid as usual, and without wanting to offend your fan base, I believe Dec did cover some of these issues in his talk – i.e., around how the systems use the database, and the supposedly intelligent and automated (i.e., programmed) analysis of the log information and its failings. Saying that, it does in a roundabout way come back to you: this automated analysis is programmed by humans, and herein lies the problem. Stephen Northcutt’s original books on intrusion detection are still worth the read and outline the reliance on human intelligence and experience in this field. So from that perspective, you have a point. But saying that, I know a lot of guys who, if given the chance to do it **properly** for an organisation, would jump at it, because it is interesting work.

    Dec’s talk really needs to be played to every organisation doing IDS/IPS. Almost all installations we have come across are not worth the investment the company has made… useless is the best description I can find. There are a hundred posts on BorB that support that, thousands more on other security specialists’ blogs – and hundreds of sales guys who could not care less once the PO was raised.


  5. D2 says:

    D2 throws his propeller back in the ring and tosses around some buzzwords and concepts. Biggest is that of time and the ability to ‘effectively’ surveil. Recently I needed the ability to run IDS at 10Gb, soon 20Gb+ – good luck! Let’s just say the Cisco 4Gb is waffle with the IDSM’s/etherchannelling (not full duplex!), even 4270’s.. Tippingpoint.. good luck catching anything! Sourcefire 3D9800… dunno… carrier-class blackbox???

    Main point is how to demonstrate the losses avoided via the ‘blocked’ attacks or the ‘what if’ mentality, when you can’t even enumerate the value of data blobs/objects, messages or flows passing through your network, let alone the ‘true’ aggregated value of nodes that provide multiple services, whether revenue-generating or support infrastructure. I am pretty much at the point of: IDS/IPS is a waste of time and just an overhead. Surely we need to stop playing the reactive, patch-management, highlight-after-the-fact, tack-on-more-complexity game… and focus on enumerating the good?

    I’d like to see ‘trusted’ binaries run a fingerprint of DSCP values that are accepted on the network to help define the good. The fingerprint and keys change over a time interval for an organisation. Foolproof? No. A dream? Perhaps. Does it stop someone rooting a machine? No. Could they learn the cycle? Sure. Any service surface is open to attack.
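    One way to read that rotating-fingerprint idea is as a time-keyed MAC: trusted endpoints derive a short tag from a shared organisation key and the current time window, stamp it onto their traffic marking, and infrastructure only trusts flows carrying a tag for the active (or immediately previous) window. A minimal sketch – the key, the window length, and the tag scheme are all my assumptions, not anything from the comment above:

```python
import hmac
import hashlib
import time

WINDOW_SECONDS = 3600  # rotate the fingerprint every hour (assumed interval)

def current_tag(shared_key: bytes, now=None) -> str:
    """Derive a short per-window fingerprint from a shared org key."""
    now = time.time() if now is None else now
    window = int(now // WINDOW_SECONDS)  # same value org-wide for this window
    mac = hmac.new(shared_key, str(window).encode(), hashlib.sha256)
    return mac.hexdigest()[:8]  # truncated tag stamped onto traffic

def tag_is_valid(shared_key: bytes, tag: str, now=None) -> bool:
    """Accept the current window's tag, or the previous one (clock skew)."""
    now = time.time() if now is None else now
    return tag in (current_tag(shared_key, now),
                   current_tag(shared_key, now - WINDOW_SECONDS))
```

    As the comment itself concedes, a rooted host still holds the key, so at best this narrows who can emit “good-looking” traffic; it is a labelling aid, not an authentication scheme.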

    Enumerating badness is a subsequent activity to enumerating goodness. We don’t enumerate goodness well enough!
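    “Enumerating goodness” in flow terms amounts to an explicit allowlist: classify each observed flow against the known-good set and surface everything else for review. A toy sketch, with entirely hypothetical network names and flow tuples:

```python
# Hypothetical flow tuples: (src_zone, dst_zone, dst_port, protocol)
ALLOWED_FLOWS = {
    ("office", "dmz-web", 443, "tcp"),
    ("office", "dns", 53, "udp"),
    ("dmz-web", "db", 5432, "tcp"),
}

def classify(flow):
    """Return 'good' for allowlisted flows, 'review' for everything else."""
    return "good" if flow in ALLOWED_FLOWS else "review"

observed = [
    ("office", "dmz-web", 443, "tcp"),
    ("dmz-web", "office", 445, "tcp"),  # unexpected: DMZ host reaching back in
]
for flow in observed:
    print(classify(flow), flow)
```

    The hard part, as the comments above note, is not the lookup but building and maintaining `ALLOWED_FLOWS` when the organisation doesn’t fully know its own network.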

    IT in my mind is about managing change and information lifecycles (I’m not moving your cheese though! :) ). The playing field is always changing. What is anomalous? What is integral traffic? When will we realise the physics have changed? Time is one of the key differentiators. Intelligent, cognizant attackers are the enemy, not statistical random car crashes! What we can in theory control and define is ‘good stuff(tm)’ :)

    As the complexity grows with guest OS’s inside separate VRF’s inside virtual switches running on virtual infrastructures, we need to focus distinctly on the endpoints and not the transit paths – as is evidenced with VPN/SSL and the overheads of processing MPLS/VPLS etc. Let’s not forget the control plane, my friends… we focus too much on the data plane. When routers/switches are pushing wirespeed, how the hell can a third-party device process more headers and data than the forwarding devices without significant latency, packet drops or failing open?

    I am not a huge fan of agents, but host-based flows and targeted local host application IDS/IPS is perhaps the way to go? Are you going to IDS/IPS all your 10G+ links to ESX/mid-range frames and farms etc.? DOH! New paradigm please!

  6. Big Galoot says:

    I take on board what you say about guys wanting to jump in and do this sort of work. I’m sure there are bucketloads of ‘em. No argument there, nor is there any criticism intended for anyone wanting to do it. The job title ‘Security Log Analyst’ also has a degree of sexiness to it. :-)

    What I am suggesting is that there would be very few humans perfectly suited to it.

    I’d be interested to know how long these guys last, and whether their sanity stays intact.

    It’s precisely because of the automation and repetition and reliance upon databases etc. that the human risk factor of boredom can come into play.

    It’s well known and widely studied in other industries such as aviation, so why is the security industry any different?

    With my apologies for paraphrasing Dec, it’s the “Xbox” factor. This is something he touched upon, and I believe it’s very valid in any industry where automation or repetition is a factor.

    Patrick Gray summarised: “Between 24 X 7 monitoring staff — yours or outsourced — slacking off and playing Xbox instead of reading real-time logs.”

    My apologies if I’ve misinterpreted anyone here.


  7. Declan Ingram says:

    @ BG and DD,

    You are both right. There are people who work very hard to do it right… and there are people who can’t concentrate for long enough (and of course, many who just don’t care). It is unfortunate that the people who do it properly are a) expensive and b) tend to be moved out of monitoring and into “more important” things.

  8. RuF says:

    Obviously the job description meant to say 24×7 reading (b)logs… Personally I find other people’s logs more interesting than my own filetype:log :)

  9. [...] again, let’s go to the contract/SLA and see what is actually going to be reported. Let’s regurgitate Dec’s presentation at Kiwicon again as a measure of what needs to be understood and addressed. Surely the cloud will [...]