Modern Security Newsletter #004 – October 2023
The autumn season of the Modern Security Community has kicked off with a bang, so it’s time for another update. Since launching in January, we’ve hosted 12 live events with 12 guests and published more than 400 minutes of video, which have been collectively attended or watched more than 1,000 times. I’ve had some fantastic feedback about how useful some of these sessions have been, and I’ve really enjoyed hosting them. I’ve learnt an awful lot along the way, and I hope you have too, but I need your help to make the community worthwhile for everyone. Please see below for one simple thing you can do today to help.
In this month’s edition, we’ll cover the usual roundup of what’s coming next and what I’ve recently published, as well as some highlights from other news and, of course, we’ll finish with something just for fun.
🔜 Coming Up in the Community
- Voice Biometrics Myth Busting – I’m really looking forward to my conversation with Brett Beranek from Microsoft/Nuance next week. We’ve both been involved with Voice Biometrics for more than a decade, and when you are so close to a subject, it’s not always easy to appreciate just how much progress has been made in that time. We are going to be addressing some common myths and misconceptions about implementing Voice Biometrics, so this is a perfect session if you’ve been thinking about the technology for a while but are still undecided. Join us on 19 Oct 📅.
- Defining the Business Case – Organisations have no shortage of things to spend their money on, so making the case for improved security requires demonstrating a return on investment. Whilst the fundamentals of the financial case can be pretty simple (see the back-of-the-envelope sketch below), I’m looking forward to going deeper and providing a step-by-step guide as well as introducing some new tools to help you make the case in your organisation. Join us on 9 Nov 📅.
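To give a flavour of just how simple the fundamentals can be, here’s a minimal back-of-the-envelope sketch in Python. Every figure is a made-up placeholder for illustration, not a number from the community or the upcoming session; swap in your own organisation’s values.

```python
# Hypothetical business-case sketch: every figure below is an illustrative
# assumption, not data from the session or this newsletter.
annual_fraud_losses = 500_000     # current annual call-centre fraud losses (GBP)
fraud_reduction_rate = 0.40       # assumed share of losses the solution prevents
calls_per_year = 1_000_000        # annual call volume
seconds_saved_per_call = 20       # assumed handle time saved by faster authentication
cost_per_agent_second = 0.01      # fully loaded agent cost per second (GBP)
annual_solution_cost = 250_000    # assumed licence and running costs (GBP)

fraud_benefit = annual_fraud_losses * fraud_reduction_rate
efficiency_benefit = calls_per_year * seconds_saved_per_call * cost_per_agent_second
total_benefit = fraud_benefit + efficiency_benefit

roi = (total_benefit - annual_solution_cost) / annual_solution_cost
print(f"Annual benefit: £{total_benefit:,.0f}")   # £400,000 with these assumptions
print(f"Simple ROI:     {roi:.0%}")               # 60% with these assumptions
```

The real work, of course, is in justifying the input assumptions, which is exactly where a step-by-step guide helps.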
Modern Security Community
Membership is FREE and open to all employees of organisations evaluating, implementing or optimising modern security solutions. Members are free to participate and share as much or as little as they want.
👥 Recent Community News
- Is Apple’s New Personal Voice and Live Speech a Security Risk? – I’ll admit to being an Apple fanboi, but I now have even greater respect for their privacy principles after doing a deep dive into the security implications of the synthetic voice capabilities found in the latest releases of their operating systems. More…
- Is Mobile the Key to Better Call Centre Security? – My emerging thinking on how, for many organisations, the mobile phone will be a key component of the call centre security process. We’re a long way from that, but I explore the current challenges in this article and where I think it’s heading. More…
- Authentication for Automation – Despite huge progress in AI and Natural Language Understanding technologies, call centre automation rates haven’t really improved: to do the things callers really want, you need to be sure they are who they claim to be, and today’s automated authentication methods aren’t up to the job. In a live event, I was joined by experts in artificial intelligence (Kane Simms from VUX.world), customer experience (Sean McIlrath from teneo.ai) and telecommunications (Tim Holve from Vonage) to dig into these challenges and discuss the opportunity that the imminent arrival of better mobile authentication will provide. More…
- Numbers – Whilst mobile authentication isn’t there yet, Network Authentication and Fraud Prevention is already proving its worth. I was joined by Abhinav Anand, who leads product development for Smartnumbers. He took us through a fascinating look at what they’ve learnt from screening millions of calls to the UK’s top banks. The fact that only 1 in 500 calls is fraudulent may sound good, but that quickly turns into thousands of calls at even a modest scale (see the sketch below), and identifying or preventing those calls may sound daunting. He shared some simple steps most organisations can take to reduce fraudulent calls. More…
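To put that base rate in context, here’s the arithmetic as a trivial Python sketch. The 1-in-500 figure is from the session; the call volumes are my own hypothetical examples.

```python
# The 1-in-500 fraud rate is from the session; call volumes are hypothetical.
fraud_rate = 1 / 500

for calls_per_year in (100_000, 1_000_000, 10_000_000):
    expected_fraud_calls = calls_per_year * fraud_rate
    print(f"{calls_per_year:>10,} calls/year -> ~{expected_fraud_calls:,.0f} fraudulent calls")
```

Even a mid-sized contact centre handling a million calls a year is looking at roughly 2,000 fraudulent calls to find.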
📰 Other News
Security and customer expectations continue to evolve. This month is no different:
- Bias and Privacy Challenges in Voice Biometrics Research – A timely academic study into the dependence on a few benchmark data sets for Voice Biometrics research. For example, several data sets are based on publicly available audio of celebrities; others are almost entirely college-age students. This is a very real risk, although its implications and impact are as yet unclear and definitely worthy of further research. I know from my own experience that organisations need to be alive to this issue when implementing the technology. (https://montrealethics.ai/benchmark-dataset-dynamics-bias-and-privacy-challenges-in-voice-biometrics-research/)
- Contextualising Deepfake Threats to Organisations – An advisory from the NSA, FBI and CISA that provides a decent primer on the whole issue as well as documenting some cases I don’t think have been disclosed before. It quite rightly focuses on the more likely media, politics and internal risks for organisations but does touch on customer impersonation. I’ve been thinking about this issue recently and will publish more soon. (https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF)
- Humans Unable to Detect a Quarter of Deepfakes – A great study (although the headline isn’t quite borne out by the data) showing that humans were significantly worse than algorithms at detecting deepfakes from high-quality audio samples, succeeding only 75% of the time. Interestingly, the study showed significantly better results when human responses were aggregated (see the toy sketch after this list), but no difference between English and Mandarin, when participants were trained in advance, or when they took more time. The experiment was fairly limited (same sentence, single speaker), so it isn’t particularly representative of the call centre use case, but it is a useful starting point for research. (https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0285333)
- Fraud GPT – A few months ago, when drafting a white paper with Opus Research, I internally coined the phrase Fraud GPT to cover the broad set of risks that Large Language Models might create for call centres. An LLM provides the glue between speech synthesis (not necessarily deepfake) and natural language understanding, allowing fraudsters to scale social engineering attacks to unprecedented levels. I even developed a rudimentary proof of concept that held a convincing conversation to socially engineer a password from a customer. Little did we know that real fraudsters were miles ahead of us: Fraud GPT is actually available on the dark web as a fraud-as-a-service tool. While it’s principally aimed at Telegram/WhatsApp and email, it underlines just how vulnerable knowledge-based credentials are to this form of attack. I’ll be writing more on this shortly. (https://hackernoon.com/what-is-fraudgpt)
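Returning to the aggregation result in the deepfake-detection study above: this toy sketch is not the study’s methodology, but it shows why pooling human verdicts helps. It assumes listeners are independent and each correct 75% of the time (roughly the per-listener figure quoted), and computes the accuracy of a simple majority vote.

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent listeners, each correct
    with probability p, reaches the right verdict (n odd, so no ties)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n // 2 + 1, n + 1))

# 0.75 is roughly the per-listener accuracy quoted above; the independence
# assumption is mine and is optimistic for real crowds of listeners.
for n in (1, 3, 5, 11, 25):
    print(f"{n:>2} listeners: {majority_vote_accuracy(0.75, n):.1%}")
```

Real listeners aren’t independent, so the gains will be smaller in practice, but the direction of the effect matches what the study reports: aggregated human judgements beat any individual listener.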
🤣 Just for Fun
- 🤦 Phishing 2FA 25 Years Ago – In this example, the hacker is attacking America Online (AOL – does anyone remember that?). I’m not sure how fun this is, but it made me chuckle that people were doing this so long ago, and that most organisations are still vulnerable to this attack. (https://twitter.com/123456/status/1710359310419607976)