Podcast

Making Sense of Artificial Intelligence and Machine Learning

Posted: 24th July 2017
By: AMANDA MCKEON

Artificial intelligence (AI) and machine learning (ML) are hot topics in cybersecurity, threat intelligence, and beyond. We hear the terms casually tossed around in conversation, we’re bombarded with AI/ML marketing, and of course, there is no end to the references in movies, literature, and pop culture.

Unfortunately, we’re often missing the context or explanation needed to know what these terms mean or why they matter. Some say AI and ML will be our virtual saviors; others offer cautionary tales of bots gone wrong.

In this episode, we welcome back Christopher Ahlberg, CEO at Recorded Future, and Staffan Truvé, Recorded Future’s chief technology officer, for a wide-ranging, spirited discussion to help sort it all out.

This podcast was produced in partnership with the CyberWire and Pratt Street Media, LLC.

For those of you who’d prefer to read, here’s the transcript:

This is Recorded Future, inside threat intelligence for cybersecurity.

Dave Bittner:

Hello everyone, and thanks for joining us for episode 16 of the Recorded Future podcast. I'm Dave Bittner from the CyberWire. Artificial intelligence and machine learning are hot topics in cybersecurity and beyond. The terms get casually tossed around, often without context or explanation, to the point where it can be hard to know what's actually meant. Some say AI and ML will be our virtual saviors; others offer cautionary tales of bots gone wrong.

On today's show we welcome back Christopher Ahlberg, CEO at Recorded Future, and Staffan Truvé, Recorded Future's chief technology officer. We've got a wide-ranging, spirited discussion on artificial intelligence and machine learning. Stay with us.

Staffan Truvé:

My favorite definition of artificial intelligence is when you have a machine doing something that, if the same thing was done by humans, you would say that that was an intelligent thing to do.

Dave Bittner:

That's Staffan Truvé.

Staffan Truvé:

So, it's very hard to get any other definition, I think, primarily because it's a constantly changing field. There's actually a tendency that once you understand an area, it goes from being artificial intelligence to being algorithms, in a way. So, you're sort of constantly expanding the boundaries of what you can do.

Christopher Ahlberg:

Recently, a smart person I met made the point that, “The problem with using the word AI is that you end up in this whole thing of bots and humanoids and robots running around and taking over the world.”

Dave Bittner:

That's Christopher Ahlberg.

Christopher Ahlberg:

With machine learning, it's a little bit more about advanced self-learning algorithms. It's a little bit more down to Earth, and let's go solve some problems, rather than getting bots to take over the world.

Staffan Truvé:

I can add to that. When you start talking about AI, it very quickly becomes philosophy, whereas machine learning is statistics.

Christopher Ahlberg:

It's an extension of statistics, applying those statistics to solve some real problems. In many ways, if you're good at machine learning, you don't really care whether it's a 200-year-old statistical technique or a one-year-old new "machine-learning technique"; you use whatever is best at solving the problem. Versus, to someone's point ... when you're in AI, you tend to be more about proving out that your thing holds up to a Turing test, versus not.

Dave Bittner:

AI and machine learning ... do either of them suit particular tasks better than the other?

Staffan Truvé:

I would say ... again, you know ... there are different phrases for different things, but I would say that machine learning is one of the primary technologies you use to build AI systems, if you like. That's the way I would phrase it. The domain in which you can apply machine learning is defined primarily by the availability of training data. The whole trick is that you can only use machine learning if there is a lot of training data for the algorithms. Otherwise, you have to resort to other things, for example, rule-based systems, which are another way of building artificial intelligence systems.

Christopher Ahlberg:

I would avoid anything that tries to say AI is better than machine learning, as some qualitative sort of differentiation. It's better to think of machine learning as a part of the AI world. Many, many years ago, I'm going to age myself, but in 1991, I took Staffan's AI course. The spring of 1991. Can you imagine? Probably half the listeners here were not alive at that point. I took Staffan's AI course, and I was telling my son the other day that if I picked up the AI book from back then — I should do that, I still have it in the bookcase — I'll bet you pretty much every damned chapter is still up to date.

Not a lot has actually changed since then. Machine-learning techniques were one set of the techniques, and there are other techniques, and it's sort of all remained pretty — static is wrong, because there have been a lot of cool developments — but the actual fundamentals are remarkably stable. Isn't that fair to say, Staffan, since you were my professor back then in 1991?

Staffan Truvé:

Yeah, I would agree. The two things that really have changed are, one, thanks to Moore's law, we now have much more storage and much more processing power, and the other one is the availability of data. Again, it's the availability of data that makes the difference here. I think it's interesting to note that the reason we can do machine learning in interesting ways at Recorded Future is that we have tons of data.

Dave Bittner:

Yeah. Just as an aside, I remember as a teen, working with 8-bit computers, you know TRS-80s, and Apples, and so forth, and the hot thing then ... Do either of you guys remember ELIZA?

Staffan Truvé:

Yes.

Christopher Ahlberg:

Absolutely.

Dave Bittner:

So, ELIZA was an early natural language processing computer program. It was created back in the 60s at the MIT Artificial Intelligence Lab, and mainly, what it did was simulate a psychotherapist, and it was pretty convincing, especially for a computer at the time. That was, for many of us, the first taste of something that made you feel like you weren't just interacting with a machine. Like maybe there was more there, even though it was obviously very simple at the time.

Christopher Ahlberg:

And that's a good distinction: ELIZA was not machine learning, nor is it machine learning to this day. Isn't that fair to say?

Staffan Truvé:

That's exactly a kind of rule-based system, an extremely simple one. I think ELIZA actually illustrates another thing, too: as humans, we very easily anthropomorphize. We are very eager to see intelligence, or to think that a system is intelligent, even though it's just following a very simple scheme to produce a dialog.
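
To make that concrete, here is a minimal, hypothetical ELIZA-style sketch in Python. The patterns and responses are invented for illustration; the real ELIZA used a richer script, but the principle is the same: canned templates triggered by keyword patterns, with no learning anywhere.

```python
import re

# A few hypothetical pattern/response rules in the spirit of ELIZA.
# There is no model and no training; each rule is just a regular
# expression paired with a response template.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I feel anxious about work"))
# -> "Why do you feel anxious about work?"
```

A dialog produced this way can feel surprisingly human, which is exactly the anthropomorphizing effect Truvé describes.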

Christopher Ahlberg:

I think that's a good point, and if my other old professor were here, Ben Shneiderman down at the University of Maryland, he would say exactly that. He would say that the bottom line is that those anthropomorphic algorithms are very unlikely to be the best ones. It's better to get down to basics here and figure out what's the algorithm that could actually solve the problem at hand.

The interesting part, as we get into talking about cyber here, is who cares whether it's AI or machine learning, or whether it sort of smells like a human or smells like a machine? What's actually interesting is whether it can help us solve problems that we otherwise have had a hard time dealing with. For me, the big thing in "cyber" is to be able to make judgments about yet-to-be-observed indicators. What can I say about an IP address that I've never seen? What can I say about a domain that I've never seen? What can I say about a threat actor that is dormant and may come alive? Those sorts of problems are the ones that map nicely, if you want, to machine learning.

Staffan Truvé:

That actually touches on another important thing. I think very often you think of AI as replacing humans, but the really interesting stuff is when you apply the same algorithms to the things which you cannot do as a human. Just as Christopher mentioned here, to be able to, for example, predict malicious IP addresses. It's using the same kind of algorithms, but it's actually solving such a multidimensional problem that humans can't do it, but the machines can. The machine doesn't see the difference between that problem, which is very hard for humans, and something which is trivial for us.
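
As a rough illustration of what judging a never-before-seen indicator can look like, here is a minimal sketch of a character n-gram classifier that scores unseen domain names. Everything here is hypothetical: the tiny training set, the features, and the scikit-learn pipeline are illustrative assumptions, not a description of Recorded Future's actual models.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples; a real system would train on large,
# curated feeds of benign and malicious domains.
domains = ["google.com", "wikipedia.org", "xkjqzv-update.biz", "paypa1-login.net"]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = malicious

# Character n-grams capture the "shape" of a domain string, which is
# what lets the model score a domain it has never observed before.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(domains, labels)

# Score a previously unseen domain: probability that it is malicious.
print(model.predict_proba(["secure-paypa1-verify.net"])[0][1])
```

The point is not this particular model but the shift it represents: instead of matching an indicator against a list of things already seen, the system generalizes from features of what it has seen to what it has not.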

Christopher Ahlberg:

So, if we think about what we do here at Recorded Future, where we have a big stack of technology that helps people with ... ultimately provides them with this sort of visibility into cyberthreats and helps them discern and figure out what's most important to them. We apply AI techniques to everything from, we'll call it multilingual language consumption, NLP techniques, to be able ... you know, we ingest content in 30 different languages at very large scale from across the world. As we take that in, conceivably that might be possible to do with humans, but I don't think so. To do it at that scale, we had to use natural language processing techniques, AI techniques. We also apply AI techniques at the other end of the spectrum when, kind of to the earlier discussion, we want to be able to make judgments on IP addresses that we've never seen before.

This can fit in at many different levels and can be helpful in many of those parts.

Dave Bittner:

What about intuition? That sense that humans have that ... the, I can't quite put my finger on it, but something just doesn't feel right?

Staffan Truvé:

This is exactly where the humans should be used. We like to talk about threat analyst centaurs, you know, sort of the combination of man and machine? That's exactly the idea there: allow the machines to do what they're best at, and then give humans the time to focus on the things we are especially good at, which you cannot easily do with machines.

Christopher Ahlberg:

That's what we've always done ... throughout our careers we've been spending a lot of time thinking about how you help people solve very hard problems. For example, helping people discover new drugs, helping people find new oil fields, helping people chase terrorists, helping people find new opportunities for financial investments, or in this case, helping people detect new cyberthreats. And as you think about those problems, they're not deterministic. Or, a better way of saying it: there is no easy button. People outside our domain will always come to me and say, "That's great. So you can predict exactly when the next cyberattack is going to be," and I'm like, "Dude, that's not how it works. There is no easy button." There's very unlikely to be an easy button for the next decade or two here.

But, if we can help out so that the machine does the boring information consumption at the scale of nation states, and beyond what nation states can do today, actually organize and sift through this data, in some cases make predictions about it, and then visualize that data, that's when it really gets interesting. Visualize that data for the human so that person can apply intuition. That's many times when intuition can really come into play, because now you can do visual extrapolation, and there are certain things that your cognitive system, and more importantly your perceptual system, as a human is remarkably good at.

So, here we're able to take the stuff that the computer's really good at, high-volume consumption and large-scale processing, apply AI techniques to that, and organize that information into visual displays where the human can apply their intuition, both cognitive and perceptual capabilities, to solve the hardest problems in cyber.

Staffan Truvé:

That's good. I think there's also an interesting symmetry here. On the opponent side, the bad guys are using a combination of sentient opponents, humans, and algorithms. So, to combat that we need to do exactly the same thing. We need to combine machines and clever people on our side as well, on the defending side.

Dave Bittner:

Is there any sense that the bad guys are using AI and machine learning?

Staffan Truvé:

Absolutely.

Christopher Ahlberg:

Yeah, no, I think there are ... now, the question here is what we call AI and so on, but you know, one of the big challenges for an opponent is to generate domain names for their campaigns and other sorts of things. So, domain name generation algorithms — I may be using the wrong term for it there — but, no question, people have used algorithmic approaches to that. I don't know if you have other examples-

Staffan Truvé:

I think a phishing campaign is a good example, where it's a question of creating credible emails, for example, which people click on, you know. It used to be the case that either you did sort of high-volume phishing attacks with very simplistic approaches, or you could do spear phishing, where a human engages in writing letters which would lure someone into clicking on something.

Clearly, I think we'll be seeing — or maybe already are seeing — machine learning being applied in the same way there, to create better phishing attacks, automatically.

Christopher Ahlberg:

But, I think it's fair to say that there are few threat actors who are consciously applying AI techniques. I don't think we see the hacktivists doing this. I don't think we're really seeing criminals doing it. Now, if you went to the top intelligence agencies, be it in the west or on the enemy side of things, clearly they have research programs to do this. You can imagine all kinds of ways of applying AI techniques on the attack side.

Staffan Truvé:

I think you're right. I mean, I don't think it's being used on a large scale. And the reason is, of course, that they don't need to, because the simple brute-force methods they use have worked so far.

Christopher Ahlberg:

Yeah.
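
For readers unfamiliar with the domain generation algorithms Ahlberg mentioned above: the core idea is that malware and its operator derive the same stream of throwaway rendezvous domains from a shared seed plus the date, so there is no single hard-coded domain for defenders to block. Here is a minimal, hypothetical sketch; real malware families differ in their details, and the seed and hashing scheme below are invented for illustration.

```python
import hashlib
from datetime import date

# Hypothetical DGA: hash a shared seed, the date, and a counter, and use
# part of the digest as a domain label. Malware and operator both run
# this, so they agree on today's candidate domains without communicating.
def generate_domains(seed: str, day: date, count: int = 5) -> list:
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}-{day.isoformat()}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".net")
    return domains

print(generate_domains("example-seed", date(2017, 7, 24)))
```

Defenders, in turn, train classifiers to spot the statistical fingerprint of such machine-generated names, which is one reason the problem of judging unseen domains discussed earlier matters.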

Dave Bittner:

You know, when I walk around a show like RSA, it seems like every other booth, or really these days, at practically every booth they're talking about their AI and ML capabilities. If I'm a consumer of these sorts of things, how do I cut through all of those buzzwords and ensure that my vendor is actually using the real deal?

Christopher Ahlberg:

You don't, is probably the real answer to it. You sort of don't really care, and you say, "Look, show me the results." Now, if you like the results, then maybe you like the technology. I would not get caught up on whether it's some endpoint detection guy saying he's using some algorithm for this or that. Or even what we've been talking about here today: focus on the outcome, focus on the results, and less on whether they're onto the right AI technique or not. That's not what you should judge it on.

Now, that said, if you run a large-scale information security program at a bank, or an oil company, or somewhere you want to make sure before you put your toe in the water and try some things, then I would simply say, you know, "That's great, you're using machine-learning techniques or AI techniques. Tell me more." And I think once you've said that sentence, "Tell me more," that's when you're going to pretty quickly know whether they're onto something or not. Pretty quickly you'll get a sense of what's there or not.

Staffan Truvé:

And if you look at the bulk of the guys that you meet on the show floor, they are doing one of two things. One is like the traditional antivirus companies: they are using machine learning, essentially, to do pattern recognition, to find new malware based on signatures which they've been trained on, and so on. So, that's pattern recognition. The second one is anomaly detection. Again, you train a system on normal network behavior, for example, and then you can apply it to find strange behavior on a network.

That's the bulk of people using machine learning in cyber today, I would say. Then, there are a few like us trying to do harder things ... to do the natural language processing, the prediction of future events, and so on.
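
As a sketch of that second category, anomaly detection, here is a minimal, hypothetical example using scikit-learn's IsolationForest. The per-connection features and the tiny baseline are invented for illustration; a real deployment would train on a large history of normal traffic.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-connection features: [bytes sent, bytes received, seconds].
# This stands in for a baseline of "normal" traffic learned over time.
normal_traffic = np.array([
    [500, 1200, 0.8],
    [450, 1100, 0.7],
    [520, 1300, 0.9],
    [480, 1150, 0.8],
])

# Train on normal behavior only; connections far from the baseline get flagged.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

# A sudden huge outbound transfer scores as anomalous (-1 = anomaly, 1 = normal).
print(detector.predict([[50_000_000, 300, 120.0]]))
```

Note that both pattern recognition and anomaly detection lean on abundant training data, which is why the availability of data keeps coming up in this conversation.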

Dave Bittner:

And when you look at what you're capable of doing today and then you look toward the horizon in the next few years, what are the things that you can't do now that you would be excited to be able to do in the future?

Christopher Ahlberg:

What we've started to touch on here at Recorded Future is, on the inbound side of things, being able to deal with human language. For now, we've had to do that in a way where, for every new language we take on, we have to do a good amount of work. We think we can radically cut the amount of work we need to do when we take on new languages. As an end consumer, I may not care about that, but I do think that once we can do that, it means we can more rapidly add more languages and get into nuances of language and those sorts of things. But maybe the other side is where it gets more exciting, which is, again, being able to discern and produce analysis on information, or indicators, that are yet to be observed.

So, once you can start saying something like, this domain may be malicious even though you have never observed it before, or this IP address has never been observed before, but again, we actually have a certain confidence that it is part of a malicious family or will go malicious in the future. Once you know that, you're going to be able to start to connect that to threat actors and campaigns and toolsets, and those sort of things.

Likewise, I think we're going to get to a point here soon where we can look at a vulnerability that is fresh out of the gate and, based on a combination of classic statistical techniques and more AI-oriented or machine-learning-based techniques, be able to say whether that vulnerability is likely to be exploited, and do that in a way that doesn't need to wait a month for the government to come back and give you sort of a score. We can do that in a much more interesting fashion.

Those are pretty good blocking-and-tackling sorts of things, and then there are other sorts of tasks. Wouldn't it be nice if I could just push a button and get a summary of everything that happened in the cyber world today? Automatically write the CyberWire. You know ...

Dave Bittner:

Hey, hey, hey, hey.

Staffan Truvé:

Host a podcast.

Christopher Ahlberg:

Yeah. Host a podcast automatically. No. But it's not easy though. Those are years out. You could imagine doing some simplistic stuff. But it's non-trivial to sort of get there. There's many reasons that humans are great at being humans, and machines are great at being machines. So the exciting part, I think, is letting machines be machines and do what they do really well, and then complement humans, not try to replace humans. That's sort of a fundamental tenet of what we do here at Recorded Future.

Dave Bittner:

I'm curious, are there any examples of things, say 20, 30 years ago or more where the common accepted knowledge was that this is something that computers will probably never be able to do, and then in the modern day, it's something that computers can pretty routinely do?

Staffan Truvé:

I think we can go farther back. If you go back to sort of the very birth of AI, back in the 50s, the impression people had at that time was that the hard things to do were things like having a computer play chess. That was seen as the height of human intelligence, whereas people thought language understanding and image understanding would be pretty simple things to do. Of course, we do them without thinking. And then the realization came that it was exactly the opposite. Since chess is extremely structured and rule-based, it actually lends itself extremely well to machines. Even so, it took decades, until the mid-90s, before a chess-playing machine actually beat a human grandmaster in chess.

At that time, the common wisdom was that image understanding, image description, was extremely hard, and that we would still not be able to do it. Now, here we are again, twenty years later, and what's happened is that, thanks to the massive amount of training data and faster machines, machines are actually becoming pretty good at classifying images and saying what's in them.

Dave Bittner:

Yeah.

Staffan Truvé:

So, I think it's interesting to see this rolling over time: at one point in time, one thing is hard and another one is easy. It also shows that it's really hard for us to see, going forward, what will be the hard and easy tasks.

Christopher Ahlberg:

Then you take that and you say, that's nice, but what is the equivalent of an image in "cyber"? And it's not easy. There are many images, and they're not very well defined.

Staffan Truvé:

There isn't that kind of training data.

Christopher Ahlberg:

Yeah.

Staffan Truvé:

Readily available. So.

Dave Bittner:

I think in the general public, when people think about AI and machine learning, it's natural to go to The Terminator. And I've even seen stories from folks like Elon Musk saying we really need to keep an eye on this. From your point of view, as people who are in the thick of these sorts of things, are there any real concerns about AI, or are they overblown?

Staffan Truvé:

I think you should differentiate between ... these Terminator things, first of all, that's robotics. There's a whole different complexity in building physical systems, which is beyond what we're talking about. We're just talking about software. I think it's a really long time before we need to worry about these sorts of AI-driven machines wandering the streets, killing people. That's not a concern. I think there are other aspects, like how AI will replace human jobs, of course. We're going to be automating a lot of routine work.

Christopher Ahlberg:

I think the point, on robots sort of waking up one morning and saying we're done with the humans, is that we will see tens of thousands of people, maybe hundreds of thousands of people, being killed, being run over by self-driving cars, before we see any conscious robots waking up and killing people. In the meantime, when all those self-driving cars have driven over or hit bunches of humans, I think we'll come up with all kinds of ways to stop robots from killing humans.

Dave Bittner:

Right.

Christopher Ahlberg:

So by the time a robot has gained consciousness, a la T2, or Terminator, maybe we've solved a few of those problems along the way. We're pretty far away from robots gaining self-consciousness or self-awareness here.

Staffan Truvé:

I would actually partly disagree, because I think humans are the main cause of car accidents, so the cars might actually save more lives than they take. But still, the point ...

Christopher Ahlberg:

They're still going to kill thousands of people in the meantime, and we're going to have to come up with ... let's assume that they cut the number of accidents by 50%, or by even more, something drastic like 90%. We're still going to have tens of thousands of people killed by cars, and I think we'll start putting all kinds of rules into those cars: if they're even close to a human, you know, just stop them.

Staffan Truvé:

And maybe, actually, to get back to the cyber theme, maybe the bigger thing that we should worry about is hackers getting into autonomous cars and doing things to them so that they behave in a much worse way.

Dave Bittner:

I even think about things like privacy, where these systems can, in an automated way, make connections between the various parts of my life: all the public records, my social media, my day-to-day travels, weaving together security cameras and places I've been and purchases I've made. I think people have concerns about their general privacy with that as well.

Staffan Truvé:

I think the privacy concern is one, and also, circling back to what I was saying before about phishing emails, I think machines that get access to that kind of information will be able to produce phishing emails containing parts of your private information, which will make it extremely hard for you to recognize that it's a malicious thing coming at you.

Dave Bittner:

Our thanks to Christopher Ahlberg and Staffan Truvé for joining us.

If you're going to be at Black Hat in Las Vegas, be sure to stop by booth 1553 to find out how threat intelligence can benefit your company and meet the folks from Recorded Future.

Don't forget to sign up for the Recorded Future Cyber Daily email where every day you'll receive top results for trending technical indicators that are crossing the web, cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses, and much more. You can find that at recordedfuture.com/intel.

You can also find more intelligence analysis at recordedfuture.com/blog.

And remember to save the date for RFUN, the sixth annual threat intelligence conference coming up in October in Washington, D.C. Attendees will gain valuable insight into threat intelligence best practices by hearing from industry luminaries, peers, and Recorded Future experts.

We hope you've enjoyed the show and that you'll subscribe and help spread the word among your colleagues and online. The Recorded Future podcast team includes Coordinating Producer Amanda McKeon, Executive Producer Greg Barrette. The show is produced by Pratt Street Media with Editor John Petrik, Executive Producer Peter Kilpe, and I'm Dave Bittner.

Thanks for listening.
