How Cybersecurity Can Keep Pace with Digital Science and AI Capabilities
By Rachel Jenkins
Digital science aims to improve established processes with new and innovative technologies, such as artificial intelligence (AI) and machine learning. While this shift in the healthcare industry is exciting, it is critical to be radically honest about the intersection of AI cybersecurity and digital science innovation. This post maps out the current cybersecurity landscape and examines whether digital science can stay cyber-safe.
Mapping Out the Cybersecurity Landscape
The global pandemic undoubtedly changed the cybersecurity landscape as we know it. According to Deloitte, data breaches impacted more than 500 million people globally between February and May 2020. Cyberattacks increased as employees began working remotely under mandatory stay-at-home orders. Surprisingly, nearly 47% of workers in the tech industry admitted to clicking on a phishing email at work.
With the dramatic increase in cyberattacks against businesses during the pandemic, we waved goodbye to the previously “safe” digital world. Cybercriminals became more sophisticated, launching multi-tiered attacks. Simultaneously, the tech sector worked diligently to develop innovations that keep online invaders away from sensitive data.
Unsurprisingly, the education and research industries took the hardest hit by cyberattacks, with healthcare and government trailing closely behind. Many of the most deeply impacted spaces have responded by adopting new technologies, such as AI.
AI Spurs New Drug Development
Pharmaceutical companies, like Pfizer, have begun to rely on AI more consistently, aiming to speed up and improve the clinical development process. The approach makes sense: AI can help pharmaceutical companies get drugs to market more quickly.
In addition to its strong performance in gene sequencing, AI is being trained to predict therapeutic efficacy and adverse effects. It can also handle the enormous volumes of paperwork and data required to support any pharmaceutical product.
Terabytes of data are generated at each stage of a new drug’s development and testing. This universe of data may hold novel insights that medication makers have not previously had access to. Machine learning, the core technique behind what we currently call AI, shines in this scenario precisely because it can perform complex math on enormous amounts of data.
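As a simplified illustration of what "performing complex math on data" means here, the sketch below trains a tiny logistic-regression model to predict therapeutic efficacy. The compound features, labels, and figures are entirely hypothetical; real pipelines use far richer data and purpose-built libraries.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit a minimal logistic-regression model with stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability of efficacy
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return the model's predicted efficacy probability for one compound."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical compound features: [binding affinity, solubility], both 0-1
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3]]
y = [1, 1, 0, 0]  # 1 = showed efficacy in trials, 0 = did not

w, b = train_logistic(X, y)
print(f"Predicted efficacy: {predict(w, b, [0.85, 0.85]):.2f}")
```

The same gradient-descent loop scales, in principle, from four rows to the terabytes of trial data described above; only the infrastructure changes.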
AI can revolutionize drug discovery in multiple ways, such as:
- Faster drug development: The convergence of biology, technology, and drug development can accelerate the creation of better drugs. Drug research and development, which conventionally takes more than ten years, could be compressed to a tenth of that timeframe. Applying the skills honed in Silicon Valley and the surrounding tech ecosystem can deliver greater benefit to patients.
- Personalized medicine: A centralized and secure data archive system can gather health data from many sources, including wearables, digital health records, and research. This information helps identify the treatment that would be effective for a specific person and the particular drug, timing, sequence, and dose tailored to their requirements.
- Augmenting scientists’ expertise: AI technology allows scientists to work more quickly and effectively, producing insights that people could not easily have reached on their own. By using machine learning, scientists can move more rapidly and generate new ideas, accomplishing tasks that previously would have required far larger teams and much more time.
- Shift towards automation and hypothesis development: By using AI to automate manual processes like pipetting and data curation, scientists can focus more on forming hypotheses and planning subsequent experiments based on what the data implies. The speed with which AI can process and analyze enormous volumes of data helps scientists arrive at fresh discoveries.
How Cyber and Science Intersect
Product manufacturing and the relationship between people and machines are being reinvented in the age of digital science. The adoption and use of digital technology can potentially transform humans’ role in production by boosting productivity in ways that were not previously possible.
Humans and Machines
Jobs are the engines of growth and the foundations of resilience for individuals. Unsurprisingly, many fear the threat of technology, concluding that new tech innovations can cause job loss and employment structure changes. Even though the risks are present, estimates of how digital technology will affect employment range considerably, from a significant loss of both trained and unskilled jobs to potential job and revenue gains due to the cooperation of humans and machines.
Cybersecurity and Ethics
Beyond the human-machine relationship, another concern lies in cybersecurity itself. While human error is undoubtedly a primary cause of most cyberattacks, implementing such innovative technologies raises its own fears. Can digital science genuinely remain cyber-safe? How can AI function in a world full of hackers waiting for signs of vulnerability?
For example, many question whether AI conforms with the HIPAA Privacy and Security Rules regarding healthcare data — especially with HIPAA’s recent regulatory restructuring.
Furthermore, the adoption of AI in healthcare causes some to worry about the ethics of the technology. According to Dataiku’s 2020 poll, ethics are the main organizational issue impeding the adoption of AI in healthcare. Although the exact problems vary by organization, these concerns usually fall into four categories: informed consent to use data, safety and transparency, algorithmic fairness, and data confidentiality.
These issues are not exclusive to the US or the healthcare sector. Around the world, governments and regulatory bodies have struggled to address them; several have introduced laws and regulations to control how AI is used. In the United States, the dilemma is partially addressed through a patchwork of state and federal legislation, but many questions remain.
Pacing Risks with Opportunity
In digital health, utilizing AI necessitates striking a delicate balance between welcoming innovation and controlling risk. That balance starts with information, since leaders must understand the environment they operate in.
First, it’s critical to understand the risks associated with implementing AI. These include privacy hazards, security lapses, and ethical concerns. Leaders who are knowledgeable about these issues can put together practical plans to reduce risks and protect patient data, beginning with this five-step risk management process:
- Identify Risk: Pinpoint exposures the company faces.
- Analyze Risk: Determine how much a specific loss would cost.
- Evaluate Risk: Understand the likelihood of particular risks.
- Track Risk: Map out vulnerability patterns in the company.
- Treat Risk: Decide whether to avoid, transfer, mitigate, or accept the risk.
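The five steps above can be sketched as a simple risk register. The risk names, annual probabilities, dollar figures, and tolerance threshold below are illustrative assumptions, not real loss data.

```python
def analyze(risks):
    """Analyze: estimate each risk's expected annual cost (probability x impact)."""
    return {name: prob * cost for name, (prob, cost) in risks.items()}

def evaluate(expected_costs, threshold):
    """Evaluate: rank risks by expected cost and flag those above tolerance."""
    ranked = sorted(expected_costs.items(), key=lambda kv: kv[1], reverse=True)
    return [(name, cost, cost > threshold) for name, cost in ranked]

# Identify: exposures the company faces (annual probability, cost in USD)
risks = {
    "phishing breach": (0.30, 500_000),
    "ransomware": (0.10, 2_000_000),
    "vendor data leak": (0.05, 750_000),
}

expected = analyze(risks)
for name, cost, above_tolerance in evaluate(expected, threshold=100_000):
    # Treat: risks above tolerance get mitigated or transferred; the rest accepted.
    # Track would mean re-running this register periodically to spot new patterns.
    action = "mitigate or transfer" if above_tolerance else "accept"
    print(f"{name}: expected ${cost:,.0f}/yr -> {action}")
```

Re-running the register on a schedule covers the Track step; the point of the sketch is that even a crude expected-cost model forces the Identify/Analyze/Evaluate/Treat decisions to be explicit.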
Keeping up with the latest developments and advances in AI technology is also crucial. With this information, executives can use AI’s potential to improve diagnoses, expand healthcare delivery, and streamline administrative procedures.
Regardless of their size, stage of development, or sector, all businesses confront risks. As mentioned earlier, tech leaders often face more challenges than others. Leaders must handle risks strategically, especially when merging new tech, like AI, into old processes. However, it’s worth the challenge when new tech innovations can steer the healthcare industry in the right direction.
A RISE Award 2022 winner, Rachel is one of the premier insurance professionals in the industry, with almost 10 years of experience. After graduating from the University of Pennsylvania's Wharton School of Business, she spent time as an underwriter at AIG and as an FI broker at Marsh. She has been with the Founder Shield team for several years, focusing on client advising and improving policy language for venture-backed companies, including financial institutions, healthcare, ecommerce, and SaaS companies.