Former GCHQ Deputy Director Andy France Targets Big Cyber Problems

May 7, 2018 • Amanda McKeon

We welcome cybersecurity leader and entrepreneur Andy France, in a conversation led by Recorded Future Co-Founder and CEO Christopher Ahlberg. Andy France’s career in cybersecurity spans over four decades, including positions as the deputy director of cyber defense for the UK government, along with positions at Darktrace, Deloitte, GSK, and Lloyds Banking Group. He serves on a number of cybersecurity advisory boards, and is currently the co-founder and director at Prevalent AI.

Andy France addresses the “big-picture” items in cybersecurity, considering what it might take to fix, once and for all, the fundamental issues security professionals face. He considers the often-used comparison of cybersecurity to public health, and provides advice on effective implementation of threat intelligence.

This podcast was produced in partnership with the CyberWire and Pratt Street Media, LLC.

For those of you who’d prefer to read, here’s the transcript:

This is Recorded Future, inside threat intelligence for cybersecurity.

Dave Bittner:

Hello, everyone. I’m Dave Bittner from the CyberWire. Thanks for joining us for episode 55 of the Recorded Future podcast.

This week, we welcome cybersecurity leader and entrepreneur Andy France, in a conversation led by Recorded Future Co-Founder and CEO Christopher Ahlberg. Andy France’s career in cybersecurity spans over four decades, including positions as the deputy director of cyber defense for the UK government, along with positions at Darktrace, Deloitte, GSK, and Lloyds Banking Group. He serves on a number of cybersecurity advisory boards and is currently the co-founder and director at Prevalent AI. Stay with us.

Christopher Ahlberg:

This is Christopher Ahlberg. I’m the co-founder and CEO of Recorded Future. It’s great fun to be able to be here today, and today, we’re here with Andy France. So, first of all — Andy, it’s great to have you with us. Do you want to start by briefly introducing yourself?

Andy France:

Sure, and thanks for the invitation, Christopher.

I’ve spent the majority of my career — just under 30 years, to be exact — as a career civil servant in a UK organization called GCHQ. That’s the UK’s signals intelligence and information assurance organization. My last job there was as the deputy director in charge of cyber defense operations.

I left five years ago to go into the commercial cybersecurity sector because I could see a big gap between what governments were able to do in cybersecurity and what industries were able to do in cybersecurity. And so, I built a cybersecurity consultancy business. Then I co-founded a number of other companies, all of which have a very similar background, in that they’re all built around security data science and behavioral analytics.

Christopher Ahlberg:

I think you’re probably a little bit too humble here about some of what you’ve done, because you’ve done fantastic work! So, it’s quite awesome to have you here.

What we’re going to try to do today is, sort of, take a step back a little bit and avoid getting into the minuscule details and talk a little bit about the big trends and maybe even about how to, from a bigger picture point of view, help solve this problem.

And so, I’ll start off by just saying, the last 12 months of cyber have certainly been crazy. Election hacking, massive botnets, and metahacks — that has been my favorite term. Thinking of Equifax, the SEC, and law firms. Places where you hack not just one company or one organization, but where you can get your hands on many organizations’ information in one place. So, we’ve seen this sort of escalation over the last 12 months and … Anything, in your mind, that we can learn from this? Beyond the fact that it’s been pretty miserable.

Andy France:

It certainly does feel miserable, doesn’t it? I think the last 12 months have highlighted just how vulnerable the integrated digital ecosystem we now rely on is. I always draw people’s attention to the fact that the internet, on which all of this is built, was never, ever designed as a secure environment.

As we have become more and more reliant on it by layering services and applications on it, probably without thinking about the security consequences associated with doing that, I would argue, now, that the internet itself is part of our respective critical national infrastructures. And so, I’m afraid that what we saw the last year is just the new reality of what the world looks like. I guess what I mean by that is, without something fundamentally changing, this is just now the way of the world.

Christopher Ahlberg:

Here in America, we like to say, “You know, we’re going to have to change the tires of this car while it’s driving,” and many versions of that analogy. To be honest, that analogy is probably not close to what the actual problem is. Because it’s not like we’re just changing the wheels of one car here — we’re changing a network that is so fundamentally embedded in everything that we do. So, if you think about these last 12 months, before we get into the details, what do you think that tells us about what we’re going to see over the next 10 years?

Andy France:

I have had the advantage of talking to lots of people around the world about cybersecurity, and without a doubt, the narrative has changed in the last 12 months. For too long, it was only ever about data theft. The totality of the cybersecurity narrative tended to be driven by vendors and by media, around large-scale data theft, and you’ve mentioned a couple of those previously.

But that’s what it became about for far too long, and I think last year, in a bizarre way, was helpful, in that now more and more people are thinking of cybersecurity in terms of data destruction, data integrity, and data disruption, and they can now recognize and understand those risks, because they have seen how those things played out in the last 12 months.

So, if your narrative has been aimed at stopping people who are stealing your credit cards, that’s one thing. But actually, when the internet’s not there because there’s a massive DDoS, and your entire business has moved to the cloud, you’ve got a continuity issue. And so, I think what’s happening is that the narrative, thankfully, has changed across all of those things I’ve spoken about. The question of what you do about it, though, I think is the one thing that is vexing everybody.

Suppliers, users, businesses, politicians. That’s the question of the day, I think.

Dave Bittner:

Andy, if I could jump in here — it’s an interesting topic you bring up, and I’m curious from a policy point of view. It strikes me that politicians have been reticent to draw any bright lines in the sand when it comes to cyber, whereas national borders are pretty clear-cut, and if you bring a military across a line — well, we all see you do that. But it seems as though on the cyber part, we’re still being a little hesitant about even declaring what would constitute cyber war. Do you have a take on that?

Andy France:

It’s very difficult, because the same narrative doesn’t exist in cyberspace. There is no national internet, there is no national boundary around a country, and as I said, we’ve layered these applications and services on top of the internet. It’s very, very hard for some organizations to work out where their data actually is. So you’ve got this transnational, underpinning, large technological beast that’s evolved underneath us all, and we haven’t actually worked out some of the more fundamental policy questions associated with that.

You’re right, Dave. You’ve hit the nail on the head — the language doesn’t help. You know, we saw the U.S. talk about the Sony attack as an act of war. Well, you know, really? A commercial entity that was attacked? Yes, probably by a nation state, but is that really an act of war?

The technology’s got away from us somewhat, and the architecture’s got away from us somewhat, and one of the things I spend a lot of time talking to people about is actually getting back to some basic understandings of what we actually mean here. What does it mean, in a cloud-enabled world, when we talk about where those data services rest? What does it mean when other countries, having realized the limitations and problems that we’ve got with the internet, start to carve out what for them would look like a national internet system? That’s not going to work, I don’t think, and people will try to bypass it, for obvious reasons.

I think, due to our own rush to add these layers and services and make money off of this, we are slightly struggling to work out what all of it means in this brave new world. But we have absolutely seen, in the last 12 months, the consequences of a nation state or an entity deciding to use all of the powers of that capability against another — and what that actually does, in 2016, in 2017, in 2018. I think that’s the slightly scary part: this has probably got a little bit away from us. We probably need to do a little bit more catching up.

Christopher Ahlberg:

Yeah, and when you think about it, it’s sort of … We can all wish for a more secure internet to appear, but it is what we have, I think, and I immediately land on a good old Rumsfeld citation from probably 2003: “You go to war with the army you have.” It’s never really popular to quote Rumsfeld, but in this case, I think it’s … I don’t know if “go to war” is the right analogy, but we go to war with the internet that we have. It’s the internet that we have that we’re going to have to try to make better.

But that said, I wanted to say, as technologists … I’d like to say, “Yay for geeks. I’m a geek.” We tend to solve problems with technology, and just stack up more technology to solve the technology problems we’ve already created. So what do you think, again, taking a step back? Can we solve “a cyber problem” with more technology?

Andy France:

Indeed, and I think therein lies the problem. I’d answer your question with an emphatic, “No.” We can’t solve this with technology, because this isn’t just a technological problem. Frankly, if it was, we would have solved it by now. It’s a much more fundamental issue than that. What I mean by that is, yes, technology is an integral part of what we’re talking about here, but so is our headlong rush to connect everything possible together on the internet for a “better user experience,” in inverted commas, and to make lots of money selling new services built from data acquired in providing those services.

I think, fundamentally, if you look at the list of … For those that don’t know, there’s a list that’s been maintained since 2001, and its basic premise is, when a manufacturer identifies a vulnerability in a piece of hardware, firmware, or software, they publish it, which then allows people to patch those vulnerabilities. The list is called the CVE list, and there’s something called CVSS, a scoring system where you score those vulnerabilities in terms of how damaging they are. Now, it’s publicly available — you can go to the site and have a look.

But since 2001, since we started counting these things, there’s been a year-on-year increase in vulnerabilities, as we rush to connect everything — kettles, fridges, cameras, and DVD devices — onto the back of the internet. We introduce more and more vulnerabilities to a platform which is fundamentally flawed, because it’s not a secure environment. So, it looks to me as if we’re not going in the right direction — downwards — anytime soon.

And before you say, “Yes, but Andy, that’s because we’re creating more gadgets and more programs” — well, we’re doing that without tackling the fact that we are baking vulnerabilities into those gadgets and programs from the outset. So, I worry that we are still suffering buffer overflows as a [inaudible] vector in 2018, when we know how to write code to stop buffer overflows.

Code block reuse is a problem that we’ve got, because what happens is, we copy and paste code, and then we introduce the vulnerability that was in the original piece of software into many other platforms. There’s something quite fundamental to me there, and that’s why I bring up the CVE list. I think it’s a really, really good barometer for your point, Christopher, in saying, “Well, surely we can just out-tech the problem.” And I think that is the problem. I think we are just making it worse, because we are not baking security into those products. And I think we need to get pretty smart about that, pretty quick.

My plea is to get back to basics about security fundamentals. I hate to say that, because it does sound a bit of a cop-out to your question, but we need to start thinking about what it means to bake security in from the outset, rather than always trying to retrofit it afterwards. That model isn’t working too well for us at the moment, so there’s a huge piece here for me about education.

Christopher Ahlberg:

When you start talking about copy-and-paste code … Now, for somebody like myself, whose coding abilities have gone downhill over the last 18 years rather than improved, the only thing I could barely get away with is copy-and-pasting code, and that’s how the world is learning. So, I think you segued very nicely into education here, which is seemingly one of the few things that we really could go at this problem with in a more fundamental way.

Then, I think back to when I was taught how to program in college — I guess I knew how to program before, but that’s where I was formally taught. I remember being taught Ada, which was this language coming out of, I think, the U.S. defense complex, but I could be wrong on that. I looked up Ada on Wikipedia, and it reads as very security-minded. Some of the terms there — design by contract, extremely strong typing, explicit concurrency — a whole bunch of things around building very secure and very solid code from the outset.

But, in practice, except for extremely complex, extremely structured organizations, it just never went anywhere. So what do we do about education to solve this without telling everybody, “Get rid of your C#, or get rid of Java, or get rid of Python, because you’ve got to use this new fancy language” — one that’s just never going to be adopted anyway?

Andy France:

Yeah, I wasn’t an Ada coder, but I certainly recognize how the guys and girls behind it were years ahead in terms of thinking about good-quality code, safety, and all those things. And I think what’s happened is, getting the product to market quicker and the user experience have always trumped security.

So, I guess, coming back to your point about education, my point is, this isn’t about us sitting down with computer scientists. This is about sitting down with the business development folks, sitting with the digitization departments of large organizations, politicians, schoolkids, mechanical engineers, software engineers — basically, anybody who has an input into how we’re building the new technology yet to be developed and delivered.

And of course, end users play a massive part in that as well. Because if people refuse to buy stuff they don’t like because it’s insecure, vendors will up their game and make sure that they write more secure code. So there is something about the whole ecosystem here. The lasting problem we’ve had is that user experience has driven everything, and I just think that causes problems.

We’ve got to get people to understand what we’re building. If you’re building a building, you have to understand the role that the building management system plays in providing an environment that’s conducive for that building to function, whether it’s a hospital, a school, or a nursing home.

At the end of the day, it’s going to have a computer in it somewhere, and all of the systems are going to be controlled by a computer somewhere. So if we’re thinking, “Okay, how are you going to embed security into that from the very get-go?” then what will happen is that we will gradually, over a period of time, start to harden things and get back to where we were with Ada. We would build secure code from day one.

Dave Bittner:

Andy, I’m curious — if I could switch metaphors for a second, how much do you think our situation parallels public health? I think about how you come at public health from many different directions. You immunize your children not long after they’re born, but there’s also an education component there. We all know now that it’s important to wash our hands. And you have things like herd immunity, where maybe not everyone is immunized, but if you get enough people immunized, that’s good enough to keep some of these things from spreading. Do you think there are parallels there between public health and cybersecurity?

Andy France:

Absolutely. I think one of the problems we’ve had is getting this narrative into a form that people can understand, and I have always found the linkage to public health a very useful metaphor. For example, risky behavior. If I’m perfectly healthy and I get on a subway or the Tube, and I sit next to someone who’s sneezing all over me, and that person is making no effort to mask it, and I am making no effort to avoid it, the chances are, I am probably going to catch a cold.

Now, that is no different than going on the internet with no protection and thinking that being smart about how I interact with this entity out there is going to keep me safe, as if I’m far more clever than anybody else because I’ve got a password that no computer is going to be able to work out. But of course, we all know that’s not true.

There is no doctor in the world who will guarantee you, even with all the years of medical research, that you won’t catch a cold this winter. We all know what it feels like to catch a cold, and we all know what to do about it. We will drink fluids, we will rest, we will take appropriate actions to make sure that this common cold doesn’t turn into something far worse. And if we detect that it is, we get remediation help from the medical community.

It strikes me that, in the cyber world, trying to think of a business that will never, ever, ever have a cyber incident is a bit like saying you will never, ever catch a cold. It just isn’t statistically possible. So, if we can shift that dynamic to say, actually, the point here is, yes, you will catch a cold. But will it turn into pneumonia that will kill you? Well, if you don’t take the appropriate measures, there is a possibility that it can. Not saying it will, but there is a possibility that it can.

Getting people to understand what that actually means in terms of their online behavior — how to protect themselves, how to protect other people, how they interact with the tools that they’ve got on their desktop, and how they interact with the data that they’ve got — I think yes, the medical analogy works quite well.

Because once you sit down with people and say, “Look, no doctor will guarantee you that you won’t catch a cold. Sadly, there are vendors out there who will guarantee that if you buy a magic box, you won’t have an incident, and we all know that that is probably not true.” There’s a realization that we have to help set in … And I think, yes, Dave, it’s a very good way of getting people to understand that if you do sit next to someone who’s sneezing all over you, you’ve probably got a better chance of catching a cold than if you don’t.

Christopher Ahlberg:

Now, switching gears a little bit. I think a couple of days ago, we saw our dear friends in the great Russia making the point that nobody has ever properly attributed any cyberattacks to Russia, and maybe more importantly for my question, the idea that the Russian government was shielding cybercriminals is ludicrous. To me, that screams, “Wow, they actually say that?”

But if you just go beyond that and ask, can police and law enforcement really have an impact here when we’re dealing with international crime? Which I think is what this is, if we stick with the criminals. Can we really make a dent in this sort of problem until we get proper legal structures set up between us and countries where the legalities are different? I guess I’m asking you, what’s the role of law enforcement here? Or maybe more importantly, of legal agreements with countries that we currently don’t have these agreements with?

Andy France:

I think that’s a really great question, Christopher, because it allows me to say two things. Firstly, attribution — working out who is doing this to whom — is phenomenally difficult, even when you have all the capabilities of a state. It is very, very easy to look and feel like one actor and be completely different. False flagging is a regular occurrence.

So, what I would say is, unless it’s an entity of state standing up and actually saying it in a very transparent way, I take a very long-term view of attribution. It’s difficult — it’s phenomenally hard to do properly, and I think it is the role of government and law enforcement to do that, rather than commercial businesses. Having said that — and you hit the nail on the head — this is a transnational problem. Not everybody that the police or law enforcement or intelligence agencies have to deal with shares the same view of the world as you do.

It’s a very, very difficult problem. It’s a transnational problem, as we’ve said before. The world has got a lot more complicated, and police and law enforcement are not omnipotent. And I think, you know, long overdue is a proper conversation about what we think our police and law enforcement agencies are able to do in this world, where the technology’s fast getting away from them. They’ve got a tough job. It’s even tougher now, with all of the things that happened in the last few years, driven by some of the technology companies.

I think that’s the slightly sad thing here, in that the narrative isn’t being driven by consumers. The narrative is now being driven by the tech companies and the government, and those two entities are taking polarized views of each other. Coming back to the conversations we were previously having, this is fiendishly complicated. There is no right and wrong answer to this. We have got what we’ve got in terms of the infrastructure and the capabilities.

What I am sad about is, we don’t seem to be having that proper grown-up, sensible debate about this — one where the narrative isn’t driven solely by either the tech companies or government. They have got a difficult job. And I understand the tech companies’ position as well. I want my communications and I want my data to be secure, but I also want the government to be able to investigate me if they think that I’m about to do something illegal, at a certain threshold.

The lack of that sensible debate, I think, is stymieing any advance that we might make in this space. I think law enforcement and police have a huge role to play, I think politicians have a huge role to play, but we have to make sure that they understand that this isn’t just a technology problem. This is now an economic problem. It’s a growth problem, it’s a society problem. There is a risk that it becomes a very binary, polarized, technical discussion between tech companies and government.

Christopher Ahlberg:

Which is where it’s tended to go, I know. It’s true, it’s true. My final question, given that my own background here is as the threat intelligence guy: what role, if any, does threat intelligence play in what we’ve been talking about? We’d love to get your take on that, as we get close to wrapping up here.

Andy France:

I would say that it plays a huge role, but let me caveat that. Threat intelligence in and of itself is not a silver bullet. If you haven’t done the cybersecurity basics — if you haven’t thought about logical access management, segregation of networks and duties, effective access control, the things that any cybersecurity controls framework will tell you — then actually, having a threat intelligence capability can be a massive distraction. Because everybody looks at the sexy stuff and doesn’t want to get involved in the basic stuff of just configuring the networks properly. If it’s done right, in an organization where you’ve got the right people and the right processes in place to integrate it into your day-to-day operations, then it’s a great thing to have.

I often tell my clients to choose wisely, though. There’s a lot of people out there trying to sell threat intelligence, as you know. Some are much better than others. So I always say to clients, “Take a good long look under the hood, and understand what it is you’re buying. Can you see the data these conclusions are built on, or are you getting a report with no access to the underlying data?”

But more importantly, in the right hands, it’s a massive capability and something that moves the agenda forward. In the wrong hands, in the wrong organization where they haven’t done the basics, it becomes a massive distraction. People should be doing basic things rather than chasing interesting things around the dark web. So I would say, it’s a huge plus, as long as you’ve thought it through and you understand how you’re going to operationalize that capability, and what it is you want from it.

Dave Bittner:

Our thanks to Andrew France for joining us, and to Christopher Ahlberg for leading the conversation.

If you enjoyed this podcast, we hope you’ll take the time to rate it and leave a review on iTunes. It really does help people find the show.

Don’t forget to sign up for the Recorded Future Cyber Daily email, where every day you’ll receive the top results for trending technical indicators that are crossing the web, cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses, and much more. You can find that at

We hope you’ve enjoyed the show and that you’ll subscribe and help spread the word among your colleagues and online. The Recorded Future podcast team includes Coordinating Producer Amanda McKeon and Executive Producer Greg Barrette. The show is produced by Pratt Street Media, with Editor John Petrik and Executive Producer Peter Kilpe, and I’m Dave Bittner.

Thanks for listening.
