Morgan Wright, Chief Security Advisor at SentinelOne, discusses AI’s role in combating cyberattacks, focusing on healthcare vulnerabilities and ethical challenges.
Morgan Wright, Chief Security Advisor at SentinelOne
Morgan is an internationally recognized expert on cybersecurity strategy, cyberterrorism, national security, and intelligence. He currently serves as a Senior Fellow at The Center for Digital Government, Chief Security Advisor for SentinelOne, and the chief technology analyst for Fox News and Fox Business. Morgan's landmark testimony before Congress on Healthcare.gov changed how the government collected personally identifiable information. Previously Morgan was a Senior Advisor in the US State Department Antiterrorism Assistance Program, the Senior Law Enforcement Advisor for the 2012 Republican National Convention, taught behavioral analysis at the National Security Agency, and spent a year teaching the FBI how to conduct internet investigations. In addition to 18 years in state and local law enforcement as a highly decorated state trooper and detective, Morgan has developed solutions in defense, justice, and intelligence for the largest technology companies in the world including Cisco, SAIC, Unisys, and Alcatel-Lucent/Bell Labs.
Morgan, please share a bit about your background and what led you to your current role at SentinelOne as Chief Security Advisor.
My experience in state and local law enforcement across the United States, along with my roles as a Senior Advisor to the US State Department Antiterrorism Assistance Program and as an NSA instructor, has highlighted how tomorrow's criminal activities will look nothing like today's.
Much of that concerns the intersection of ‘real’ physical threats and cyber ones. When the internet became a publicly accessible asset, beyond military and academic institutions, websites acted as billboards for an organization or group. Today, everything from major infrastructure to our most personally identifiable information is stored digitally. One of the most noticeable changes is that you no longer need to be in a bank to rob it, just as you no longer need to invade a home to hijack someone’s assets.
Having testified twice before the US Congress on the safety and security of large government systems, including Healthcare.gov, and through my work at Cisco, Bell Labs/ALU, and SentinelOne, it has become apparent that the shaky ground of 'normalcy' law enforcement has known is about to give way to a world of AI-driven attacks.
What draws me most to the field is the challenge that AI, unlike people, suffers neither human error nor fatigue. It has been hard enough to defend physical assets, but now, with AI's shapeshifting cleverness, it will take the best of us to keep everyday citizens and the digital systems they rely on safe.
Healthcare institutions are increasingly becoming targets for cyberattacks, particularly in regions like India. What are the key vulnerabilities that make this sector so attractive to cybercriminals?
During the first half of this year, India's healthcare sector reported an average of close to 7,000 cyberattacks per month.
As if that number wasn't alarming enough, this is all happening in tandem with a massive skill gap among cybersecurity professionals across sectors, with up to 30% of digital security roles unfilled at times. This lack of talent, whether in the healthcare field or elsewhere, presents a danger to patients, whose doctors and caretakers need reliable information to keep them healthy.
For example, unfilled security positions at a software company that manages timesheets can create a tiny opening that threat actors can use to land and expand. One example came in 2018, when a casino was hacked through an internet-connected thermometer sitting in a fish tank! If we think of a hospital with potentially thousands of connected medical devices and even more individuals walking the facility grounds, it becomes a true security conundrum.
Now if we take into account the realities of today's healthcare industry, with its strained budgets and seemingly countless vendors, security vulnerabilities compound in a way that can have unpredictable impacts on society at large.
The payday for hackers? A single attack on a single system that missed a security patch or is past its serviceable life can grant access to potentially tens of thousands of records, covering both in-facility patients and those receiving services in their homes. Extorting payment for each record could lead to a huge payoff, with no guarantee to the public that it won't happen again.
AI technology is often touted as a powerful tool in combating cyber threats. How do you see AI being used to prevent ransomware attacks, especially those similar to the one that impacted Change Healthcare? Are there specific AI-driven solutions you would recommend?
The Change Healthcare attack was a wake-up call for industry executives who sat and watched as the impact of the attack cascaded well after it was originally discovered.
The truth is that even if every single one of the open cybersecurity positions were filled, people alone are unfit to be the first line of defense. As mentioned above, the thousands of attacks that come in as relentlessly as tsunami waves need to be identified and analyzed, and their risk levels understood.
Seasoned cybersecurity practitioners are capable of conducting these activities, but we must ask ourselves two questions. First, at what point does this lose efficiency? Second, is chasing minor incidents across a complex network the best use of these experts' time? AI automates response with continuously improving effectiveness, recognizing anomalous behavior and taking action at machine speed, then distilling its findings into simple yes/no questions for the practitioner.
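As a toy illustration of the anomaly recognition described above (real AI-driven security products use far richer behavioral models than this), a minimal statistical outlier check over a metric such as login attempts per hour might look like the following sketch; the function name and threshold are illustrative assumptions, not any vendor's API:

```python
import statistics

def flag_anomalies(values: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of points more than `threshold` standard
    deviations from the mean of the series (a crude z-score test)."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # a flat series has no outliers
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]
```

A flagged index would then be surfaced to the practitioner as the kind of yes/no triage question mentioned above, rather than requiring a manual sweep of the whole series.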
AI makes decisions quicker, faster, and better, presenting next-step options, and that ability has an impact downstream on cost, security, recovery, resiliency, and operational effectiveness. However, security practitioners should consider whether AI is at the core of a security process or merely an add-on.
For both new and legacy systems, security officers should consider whether AI is at the core of any security product they plan to onboard. Yes, you can reduce expenses by using AI as an 'add-on', but would you accept that anywhere else? Would you purchase a vehicle with a top-notch airbag bolted onto the steering wheel?
At SentinelOne, we believe that all of our modern products require AI to be embedded early in development so that any organization can get the most up-to-date security and functionality.
With the growing scale of cyber threats in the healthcare industry, what are the most critical steps healthcare providers should take immediately to bolster their cybersecurity defenses?
Just as with picking a security system, procurement departments must check whether the medical devices they bring into their environment comply with the FDA's premarket cybersecurity guidelines released at the end of September 2023. While these are enforceable only within the US market, Europe, India, and other governments have historically followed suit; aligning with them keeps Indian manufacturers globally competitive and helps ensure the safety of Indian patients who require medical treatment.
Beyond that, AI-embedded security tools must be leveraged to ensure that all devices and software used within a healthcare facility meet security standards. This includes:
- a. Modernize infrastructure and solutions- Look to the future of healthcare threats and prepare for what may come, rather than only defending against what has happened in the past.
- b. Threat and risk discovery- New vulnerabilities are continuously being created and discovered. Only the most up-to-date systems can keep their finger on the pulse and inform practitioners of new discoveries, their risk level, and suggested next steps.
- c. Fundamentals- Multi-factor authentication and continued training are critical for all staff, so they protect their credentials and know when to escalate suspicious behavior under their profile.
In addition, to benefit from the collective work of cybersecurity professionals, organizations must regularly install manufacturer-recommended updates that close any security gaps that have come to light.
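The multi-factor authentication fundamental above is typically built on standard one-time-password algorithms. A minimal sketch of RFC 4226 HOTP (and its time-based variant, RFC 6238 TOTP), using only the Python standard library, shows how little machinery the second factor itself requires; this is an educational sketch, not a hardened implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian 8-byte counter."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226, section 5.3)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    return hotp(key, int(time.time()) // step, digits)
```

With the RFC 4226 test key `b"12345678901234567890"`, counter 0 yields `755224` and counter 1 yields `287082`, matching the published test vectors.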
Healthcare institutions often rely on federal funding to recover from cyberattacks. How can these organizations balance the need for external financial aid with the imperative to invest in their own cybersecurity infrastructure?
As these attacks become increasingly prevalent, even the norm, governments will shift the financial burden of recovery from public funds to private responsibility. This means that hard decisions have to be made.
Decisions that look at the financial impact over the next quarter instead of the next five years will create long-term challenges, including having security teams continuously act in repair and recovery mode instead of acting strategically. Investing in modernizing security, such as modern network management technologies and AI-driven cybersecurity, will save money and reduce risk over time.
One example of how the private sector is forcing organizations to make responsible long-term security decisions comes from the insurance industry. While government standards rely on understanding local needs within global markets, insurers want to make sure that any organization they insure has:
- a. Properly considered their risk
- b. Playbooks for how to handle threats
- c. Backup records in a safe and reliable way
- d. Board buy-in to manage this risk long term
The increasing reliance on AI for cybersecurity in healthcare raises potential ethical concerns. What do you see as the major ethical implications, and how can healthcare institutions mitigate these risks?
Today's AI is an extension of our human capabilities. While it can scale what people can do by processing large amounts of data across mind-bogglingly complex databases, it operates within parameters set by humans.
While setting up these guidelines or parameters, organizations need to consider the following:
- a. What's the end goal? – Before throwing AI at a problem, it's critical to understand what problem you are trying to solve. It needs to be implemented practically, so it can run in the background and become a strong tool that team members want to use rather than a 'nice to have'.
- b. Focus on protecting patient data and information– Safeguarding protected health information (PHI) and personally identifiable information (PII) starts with a strong policy. Upon implementation, organizations must deliberately apply AI to support that policy.
- c. Diagnose then prescribe– AI, just like any other system, MUST be designed with patient security in mind. What is it allowed to do, and what is it not?
Given the financial strain that cyberattacks can place on healthcare providers, how should these institutions prioritize their cybersecurity investments to ensure maximum protection without compromising patient care?
Healthcare providers should start by figuring out exactly what cybersecurity problems they need to solve. Without a clear plan, they might end up wasting money on ‘shiny objects’ that drain resources.
In addition, it's important to upgrade old systems that have reached the end of their serviceable life. Not only will AI investments be less effective when protecting systems that lack secure designs, but maintaining outdated technology is also costly, becoming more expensive in the long run than simply investing in newer, safer options.
This will also benefit patient care by ensuring that the systems they rely on, whether it be physical medical devices or software programs, are being properly protected without needing to be taken offline.
The concept of moral hazard is sometimes discussed in the context of cybersecurity funding for healthcare institutions. Can you explain this concept and how it might be addressed to promote long-term security improvements?
Moral hazards arise when an organization has an incentive to take risks with the knowledge that any costs will fall on a third party. In the context of healthcare cybersecurity, this can occur when institutions rely too heavily on external funding or insurance to address cyber threats, potentially leading them to underinvest in their own cybersecurity measures.
To combat this, healthcare organizations must recognize the critical importance of investing in robust cybersecurity infrastructure. While external support, such as government funding or insurance, can help mitigate the costs of recovery from cyberattacks, relying solely on these can create a false sense of security. Institutions should instead focus on:
- Strategic Investment: Allocating resources to essential cybersecurity technologies and staff training, ensuring that investments are made based on a strategic assessment of the most significant threats.
- Vendor Responsibility: Establishing clear guidelines and expectations for cybersecurity vendors to ensure they deliver solutions that truly meet the healthcare provider’s needs and do not just add to the complexity or cost without enhancing security.
- Continuous Auditing: Regularly auditing cybersecurity practices and infrastructure to ensure they are effective and that no unnecessary redundancies or inefficiencies are costing the institution both financially and in terms of security readiness.
With the rise in cyberattacks against healthcare organizations, what role should governments play in setting cybersecurity standards and providing support to these institutions?
The future of security, whether physical or digital, will always rely on private/public feedback loops to set standards and understand how implementation takes form in practice.
When government cybersecurity standards are approved, private institutions set them as the new bar for remaining competitive. We saw this when the US Department of Homeland Security (DHS) and the Cybersecurity and Infrastructure Security Agency (CISA) rolled out Secure by Design, a set of standard practices to ensure a system's overall safety and resiliency during deployment and beyond.
While financial support is always welcome to help organizations meet the new standards, the government can provide long-term assistance through threat intelligence databases and solutions, such as CISA's Known Exploited Vulnerabilities (KEV) catalog and training on the latest attack trends.
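CISA publishes the KEV catalog as a machine-readable JSON feed, so a security team can fold it into routine triage. The sketch below filters a small in-memory excerpt by vendor; the two entries shown (Log4Shell and EternalBlue) are real KEV entries, but the sample structure here is a hand-built excerpt mirroring the feed's schema, and the helper function is an illustrative assumption rather than any CISA-provided API:

```python
# Hand-built excerpt mirroring the schema of CISA's KEV catalog JSON feed.
sample_feed = {
    "vulnerabilities": [
        {"cveID": "CVE-2021-44228", "vendorProject": "Apache",
         "product": "Log4j2"},
        {"cveID": "CVE-2017-0144", "vendorProject": "Microsoft",
         "product": "SMBv1"},
    ]
}

def kev_for_vendor(feed: dict, vendor: str) -> list[str]:
    """Return the CVE IDs listed in the catalog for a given vendor."""
    return [v["cveID"] for v in feed["vulnerabilities"]
            if v["vendorProject"].lower() == vendor.lower()]
```

In practice a team would fetch the live feed from cisa.gov on a schedule and cross-reference the CVE IDs against its own asset inventory to prioritize patching.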
AI-driven insights can be valuable for upskilling employees within healthcare organizations. What are the main challenges you anticipate in implementing AI-based training programs, and how can these be overcome?
Implementing AI-based training programs in healthcare can be challenging. One major issue is sustainability: starting a program is relatively easy, but keeping it funded and up to date over time is much harder. Another is ensuring that patient health information (PHI) and personally identifiable information (PII) are protected in every aspect of the program.
It’s important to remember that a human element will always be involved. The goal should be to help employees improve their skills so they can automate routine tasks while still being able to make crucial decisions based on the data. AI can assist in making these decisions faster, leading to better patient outcomes.
To overcome these challenges, focus on creating training programs that empower employees to do their jobs more effectively, close the talent gap, and maintain security standards.