AI Enables Predictability and Better Business

January 4, 2021 • Caitlin Mattingly

Joining us this week is Aarti Borkar, vice president of product for IBM Security. She shares the story of her professional journey, from her start as a self-described data geek through the path that led her to the leadership position she holds today.

Aarti also shares her views on artificial intelligence, and how she believes it can be an enabler for security and the business itself. And we’ll get her thoughts on welcoming new and diverse talent to the field.

This podcast was produced in partnership with the CyberWire.

For those of you who’d prefer to read, here’s the transcript:

This is Recorded Future, inside threat intelligence for cybersecurity.

Dave Bittner:

Hello everyone, and welcome to episode 190 of the Recorded Future podcast. I’m Dave Bittner from the CyberWire.

Joining us this week is Aarti Borkar, vice president of product for IBM Security. She shares the story of her professional journey, from her start as a self-described data geek through the path that led her to the leadership position she holds today.
Aarti also shares her views on artificial intelligence, and how she believes it can be an enabler for security and the business itself. And we’ll get her thoughts on welcoming new and diverse talent to the field. Stay with us.

Aarti Borkar:

I call myself a data geek first, before anything else. I started in the world of data, analytics, and AI at IBM just coming out of college, so I’m a longtime IBMer. I grew up in the world of data, and AI and analytics come naturally to me; I probably think in models. I spent a bunch of time building out a variety of businesses at IBM, like data privacy and data integration. I did cross over and spent time building out API management in the cloud, the initial journey that clients took to the cloud and the middleware required behind it.

And at every step of these areas, both on the data journey and the cloud journey, there was this constant of security underlying it, and it’s always been there. But just a few years ago, I jumped head-on into a full security role with my current role of leading products for the IBM Security brand. And I think it really helps having that background in cloud and data coming into this role, given what’s happening in the security world right now. But if you ever ask me to define myself in two words, I’m just going to call myself a data geek.

Dave Bittner:

Fair enough. Fair enough. What sort of transitions and evolutions have you seen within IBM itself in the time that you’ve been there?

Aarti Borkar:

To be honest, with my data hat on, which is where I tend to go for questions like this, and thinking about its impact on security right now, there’s been a journey of going from reactive to proactive in every sense of the word.

In the world of data over the years, you did analytics, and you found out behaviors and patterns after something had happened. And it’s been an evolution over the last couple of decades where nearly everything we do has gotten to a proactive or a predictive behavior pattern. Now, it’s not always great, but it’s great in a lot of different areas where that predictive ability allows us to do better business. It allows us to do better client service. It allows us to do a whole host of interactions better. So when I look at clients moving into the cloud and IBM helping them on that journey, so much of it is being better prepared, better planned, more structured, and more ready for that transformation.

And a lot of that very often comes down to going from being reactive to being proactive and predictive in nature. And I love the fact that it’s coming to security. I mean, we will always have to do a bunch of defense. That’s just par for the course with the word security. But the more predictive we get, the more we can understand the environment, and we can pinpoint the defense that is required. For me personally, as a data geek, you always want the world to be more predictive than reactive. And it’s a great journey that I think IBM has gone through, and the market’s gone through as well. It’s not security-specific; it might have even happened in other areas of the business before security, but I’m really glad security’s on the cusp of doing all of that now.

Dave Bittner:

And what role does artificial intelligence play in that ability to be predictive?

Aarti Borkar:

AI helps us get there. I know a lot of people will tie AI to being predictive. I think what AI brings is the ability to analyze information at a large scale in short durations of time, looking for patterns that would be very hard for human beings to find. Those patterns allow us to then start building more advanced models, in AI and beyond, that let us go down that predictive journey. So the speed and accuracy of finding these patterns, needle-in-a-haystack kinds of things in some cases, allow us to get predictive in nature. It would be very hard without the power of AI to be on that journey.

Dave Bittner:

And how is that iterative? How do you make sure that the systems are on the right path? How do you make sure that that mysterious AI black box we sometimes think about is actually doing the things that you want it to do and not going off in some direction that you don’t?

Aarti Borkar:

Yeah. I tend to find my inspiration here in the world of comic books: with great power comes great responsibility. Here’s what happens; let me take a security example. When you think of a set of detection patterns and you’re trying to use AI for it, you’re going to initially start by getting large volumes of data, looking across the globe at the types of patterns that are there and compromises that might’ve happened in different environments. You get that data volume, you get a set of experts who can understand it, and you start creating models to detect any of those patterns that you already have, or variations on those patterns. That is an iterative process, because you’re not going to have every single pattern in the data you have. You’ve got to be able to expand and do some iterative work.

And then you give yourself a threshold of saying, okay, I want to catch 80 percent of the patterns, because then the human beings involved in the process can catch, detect, and investigate the remaining 20 percent or so. So the initial process itself is iterative. If you notice, I said a few things that are critical here. One is the volume of data and the 360-degree nature of it. You want to be able to see infiltration patterns around the world, because they’re going to be different in, say, Europe or Asia Pacific, or even different parts of Europe; Scandinavia will see different ones from Central and Eastern Europe, et cetera. Or different parts of the U.S., or different industries across the globe. So the variation and variety of that data at the very start needs to be pretty wide, because if you don’t have it, you’re going to have blind spots. Incomplete data creates blind spots, meaning you’re going to miss something. So that’s step one.
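To make that workflow concrete for readers, here is a minimal sketch, assuming a generic scikit-learn classifier and synthetic data: relax an alert threshold until the model catches roughly 80 percent of known-bad events, and leave the rest for human analysts. The names, numbers, and library choice are illustrative, not IBM’s actual pipeline.

```python
# Illustrative only: the 80 percent target and the human-in-the-loop routing
# come from the conversation above; everything else is an assumption.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a wide-variety event dataset (label 1 = malicious).
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

TARGET_RECALL = 0.80  # "I want to catch 80 percent of the patterns"
threshold = 0.95
# Iteratively relax the alert threshold until the target recall is met.
while recall_score(y_test, scores >= threshold) < TARGET_RECALL and threshold > 0.05:
    threshold -= 0.05

flagged = int((scores >= threshold).sum())
queued = int((scores < threshold).sum())
print(f"threshold={threshold:.2f}, auto-flagged={flagged}, "
      f"left for human investigation={queued}")
```

In practice the iteration Aarti describes also extends the training data itself as new patterns are found; the loop above only tunes the alert threshold.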

And then I talked about the experts that will help you. Now, security expertise, as an example, has wide variations. There are different schools of thought on not just attack vectors, but response patterns, et cetera. There are also different experiences if you are in different geographies or came from a different background, if you were in government somewhere else, et cetera. The subject matter experts themselves cannot have any biases in their collective thinking. Now, human beings have bias, period. That’s not a question mark. But you can create the right team of people such that there is no bias in your collective thought process. When I look at some of our teams at IBM, like the X-Force teams, you really feel you have one of everything. It’s a nice social experiment for me to just watch. But what it does is create a cognitive diversity in that process as well.

So you’ve got the data that doesn’t have blind spots, and you manage the cognitive diversity in your experts around it. And then you have standard things like algorithmic bias, which is just computer science jargon for making sure you’re not missing anything as you write the actual models. But the first two are really important, especially in the world of security, because we’re in the business of finding every little, tiny gap that might exist, and you want to protect it. Consider the mistakes you can make if you’re building a marketing model. Nothing against marketing, but if you send two extra emails to somebody with marketing language, it’s not going to be life-changing. If you’ve got a bias, or a blind spot, in a security use case, that could be life-changing. So making sure those biases and blind spots don’t exist in the data and the individuals that form the team is very critical.
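One common way to make a blind-spot check like this concrete is to measure detection performance per data segment, rather than only in aggregate, so a regional or industry gap can’t hide behind a good overall number. The following sketch is a hedged illustration; the segment names, events, and ten-point gap rule are all invented for the example.

```python
# Hypothetical blind-spot check: compare per-segment recall to overall recall.
from collections import defaultdict

def recall_by_segment(records):
    """records: iterable of (segment, was_malicious, was_detected) tuples."""
    caught, total = defaultdict(int), defaultdict(int)
    for segment, was_malicious, was_detected in records:
        if was_malicious:
            total[segment] += 1
            caught[segment] += int(was_detected)
    return {s: caught[s] / total[s] for s in total}

# Invented events, keyed by the kinds of segments mentioned above.
events = [
    ("scandinavia", True, True), ("scandinavia", True, True),
    ("eastern_europe", True, False), ("eastern_europe", True, True),
    ("us_finance", True, True), ("us_finance", True, False),
]

overall = sum(e[2] for e in events if e[1]) / sum(1 for e in events if e[1])
for segment, recall in recall_by_segment(events).items():
    flag = "  <-- possible blind spot" if overall - recall > 0.10 else ""
    print(f"{segment:15s} recall={recall:.2f}{flag}")
```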

Dave Bittner:

Are there ethical concerns here as well as these algorithms are put together, as they’re deployed, as you all are making decisions as to how you’re going to dial these things in and put guard rails on them and things like that? Does that come into play?

Aarti Borkar:

I’ll give you a personal opinion here. I think with regular technology you can actually write tech and you can pull it back. You can say, oh, this program didn’t work, I’m going to delete it. It’s very hard to do that systematically if you’ve trained an AI environment, especially if it’s a self-learning model that has been deployed widely. It is really hard to pull back every element of it, and it’s very hard to make it unlearn something. That is not a trivial practice. I joke about it being like training a child. If the child suddenly picked up a whole bunch of bad habits, it takes a lot of effort to make that child unlearn those habits, because it’s seen a bunch of things it shouldn’t. AI nearly behaves that way. So we have to be far more conscious of our actions and our decisions as we build it and as we lay it out.

Now, as somebody who is going to procure this AI, you can’t always test every element of it. It takes a lot of skill, and if you had the ability to test every element of it, you wouldn’t be procuring it, you would have built it yourself. In that case, the ethics of the company you’re working with and the ethics of the team that you’re working with definitely become important. And the ethics align with the outcomes. So if you have a well-defined set of outcomes, and those outcomes are ethical, and you’ve managed to reduce bias and blind spots from the assets, the tech, and the people behind it, then you can convert ethics, which is a nebulous term, into an actionable ethics setup in AI. So the outcomes need to be ethical, and the data and the people need to be as unbiased as possible, and then you’ll probably get the outcomes you’re going for.

But yes, in the larger sense of the term, ethics are really critical. In a more commercial sense, being able to trust the people that you’re working with or buying from in the world of AI becomes very important as well.

Dave Bittner:

I’d like to get your take on threat intelligence and the part that you think it plays in an organization’s defenses.

Aarti Borkar:

So, one, I might be slightly biased, but clearly it’s very, very important. It’s what we do for a living, so we clearly think it’s critical. Now, I’ll tell you why. In making some of our threat management, or even other security elements like identity security or fraud, actionable and more predictive in nature, having threat intelligence of the right caliber and granularity becomes critical, because it can become the foundational data that you start building some of those models on. In the absence of that, you don’t have a wide spectrum of data coming from around the world that you can then build mission-critical models with. You can build specific models on detection patterns or response theories, or, in the case of fraud, you can combine that threat intelligence with a level of behavioral biometric models, and you now have a risk-based fraud engine.

So it’s one of those data flows that needs to be very much a part of a lot of different areas of response. Now, it’s critical that this information is of the right fidelity and caliber, as well as updated. You really want real-time threat intelligence piping through your models in some of these cases. It’s not required everywhere, but some of the core threat use cases for AI and ML nearly need that information piping through. So from a security perspective it’s obviously critical, all of us will agree. But from an AI and ML perspective in the world of security, I think it plays a really important part.
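As a concrete illustration of the risk-based fraud engine described above, here is a minimal sketch that blends a stand-in threat intelligence reputation lookup with a behavioral-biometrics anomaly score. The feed, field names, weights, and thresholds are all assumptions made for the example, not any particular product’s API.

```python
# Hypothetical risk-based fraud scoring: threat intel + behavioral biometrics.
from dataclasses import dataclass

# Stand-in for a continuously updated threat intel feed, keyed by source IP
# (documentation-range IPs; a real feed would be piped in near real time).
THREAT_INTEL = {"203.0.113.7": 0.9, "198.51.100.22": 0.4}

@dataclass
class Session:
    source_ip: str
    behavior_score: float  # 0.0 = typical for this user, 1.0 = highly anomalous

def fraud_risk(session: Session, intel_weight: float = 0.6) -> float:
    """Blend feed reputation and behavioral anomaly into a single risk score."""
    intel_score = THREAT_INTEL.get(session.source_ip, 0.0)
    return intel_weight * intel_score + (1 - intel_weight) * session.behavior_score

for s in (Session("203.0.113.7", 0.2), Session("192.0.2.10", 0.8)):
    risk = fraud_risk(s)
    action = "step-up authentication" if risk > 0.5 else "allow"
    print(f"{s.source_ip:15s} risk={risk:.2f} -> {action}")
```

A real engine would use learned models rather than a fixed linear blend, but the shape is the same: the threat intelligence flow becomes one more feature feeding the risk decision.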

Dave Bittner:

You mentioned IBM’s X-Force team, and I’m familiar with several of the people who work there or have worked there. I’ve had the pleasure of interviewing several members of that team. And I’m curious, you mentioned the variety of folks there, the different types of thinking, that having that diversity of thought leads to better outcomes. I’m curious for your own perspectives as a leader, as you’re assembling your own team of people, what goes into that? What’s your strategy for putting together an effective team?

Aarti Borkar:

That’s a really good question. Maybe I learned how to build these teams from how AI teams are very often built. But I’ll use an example: if you ever look at a math department at a university, more often than not it has the most cognitively diverse setup, and it makes you wonder why. Why would a math department go out of its way to be so cognitively diverse? Well, it’s because even in the most foundational sciences, like math, the contradicting opinions tend to create a team that gets you closer to the right answer. Your errors cancel each other out, your blind spots cancel each other out, and so you never end up in a place where you’re very far from the final right answer. Reading some of that philosophy of how those teams have been created, and growing up in the world of data and AI, has played a big part in the way I create teams.
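The “errors cancel each other out” intuition is the same one behind ensemble averaging, and it’s easy to demonstrate numerically. The sketch below, with invented numbers, compares a team whose members make independent errors against one whose members all share the same error: averaging helps only the first.

```python
# Toy demonstration: independent (diverse) errors shrink under averaging;
# shared (groupthink) errors do not. All figures are invented.
import numpy as np

rng = np.random.default_rng(seed=42)
truth = 10.0
n_experts, n_trials = 25, 10_000

# Diverse team: each expert's error is drawn independently.
diverse = truth + rng.normal(0, 2.0, size=(n_trials, n_experts))
# Homogeneous team: everyone shares the same error on each trial.
shared = rng.normal(0, 2.0, size=(n_trials, 1))
homogeneous = truth + np.repeat(shared, n_experts, axis=1)

for name, estimates in (("diverse", diverse), ("homogeneous", homogeneous)):
    team_answer = estimates.mean(axis=1)  # the team's averaged call
    rmse = np.sqrt(((team_answer - truth) ** 2).mean())
    print(f"{name:11s} team RMSE: {rmse:.3f}")
# Expect roughly 2.0 / sqrt(25) = 0.4 for the diverse team, ~2.0 otherwise.
```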

And so when I walked into the security organization at IBM and saw the X-Force teams, I thought, I don’t know if they did it purposefully, with the same thought process and math model in their heads, but they seem to have achieved it nonetheless. There are two parts to it. One is looking for people who have what’s required to get to the outcome of the team I’m building, but who are different enough from the other members of the team that you have the right amount of overlap and the right amount of net new skill that wasn’t already part of the team, so that collectively you have the skills required to go after the end goal. Now, there are obviously other elements, like culture, behavior patterns, growth mentality, and all the other things that we think about. But I learned very early from looking at some of these academic models … When I say academic models, I mean the models that academics use at universities on how to build a cohesive team.

And honestly, security needs that more than anywhere else, because there isn’t another part of the tech landscape where a gap in part of the process could have such a dramatic, detrimental effect. One tiny gap creates a much bigger problem in the world of security than it ever would anywhere else. And so, not having any blind spots on your team is that much more important.

Dave Bittner:

What sort of recommendations do you have for someone who’s considering coming into the industry? I’m thinking of someone either coming up through school, or maybe thinking about switching from some other line of work.

Aarti Borkar:

My first reaction is that no one, literally no one, should say, oh, security is too hard, or security is too different, or I’m not meant for it, because honestly, the more variety of thought we have in the world of security, the better off we’re going to be. We nearly need more of that, not less. I think we do need that diversity. I think security started as the most diverse field. If you look at security teams from 15 years ago, take a picture of any of the top-line security teams, or look at any government security team from that time, and you see a lot more cognitive diversity than you see today. So we actually want some of that. It helps to have people who have behavioral sciences backgrounds, and economics backgrounds, and things like that, because all those skills are required to get the answer right.

So, one, never ever think that you don’t belong in security, because you probably do. And I think the second bit of advice would be to know what you’re getting into. You’re getting into a tradecraft that requires a lot of passion and a lot of focus. It goes after a very fundamental human need: protection. So it’s a mix of feeling like you belong when you walk in, and having the passion to solve this really hard and continuous problem. If you have either of those things, or both, you’d be a perfect fit in the world of security.

Dave Bittner:

Our thanks to Aarti Borkar from IBM Security for joining us.

Don’t forget to sign up for the Recorded Future Cyber Daily email, where every day you’ll receive the top results for trending technical indicators that are crossing the web, cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses, and much more. You can find that at recordedfuture.com/intel.

We hope you’ve enjoyed the show and that you’ll subscribe and help spread the word among your colleagues and online. The Recorded Future podcast production team includes Coordinating Producer Caitlin Mattingly. The show is produced by the CyberWire, with Executive Editor Peter Kilpe, and I’m Dave Bittner.

Thanks for listening.
