Stay on guard: artificial intelligence, celebrated around the world, poses a threat comparable to a virus leak

In September 2021, scientists Sean Ekins and Fabio Urbina were running an experiment they had dubbed the "Dr. Evil" project. The Swiss government's Spiez Laboratory had asked them to find out what would happen if their artificial intelligence drug discovery platform, MegaSyn, fell into the wrong hands.

Much in the same way that undergraduate chemistry students play with ball-and-stick model sets to understand how different chemical elements interact to form molecular compounds, Ekins and his team at Collaborations Pharmaceuticals used a publicly available database containing molecular structure and bioactivity data for millions of molecules to teach MegaSyn how to generate new compounds with pharmacological potential. The company uses the program to accelerate drug discovery for rare and neglected diseases. The best drugs are those with high specificity - acting, for example, only on desired or targeted cellular or neural receptors - and low toxicity, to reduce adverse effects.

Normally, MegaSyn is programmed to generate the most specific and least toxic molecules. Instead, Ekins and Urbina programmed it to produce compounds like VX, an odorless and tasteless nerve agent and one of the most toxic and fastest-acting man-made chemical warfare agents known today.

Ekins planned to outline the findings at Spiez Convergence - a biennial conference where experts gather to discuss the potential security risks of recent advances in chemistry and biology - in a presentation on how artificial intelligence for drug discovery could be misused to create biochemical weapons. "For me, it was trying to see whether the technology could do that," Ekins said. "That was the curiosity factor."

In their Raleigh, North Carolina office, Ekins stood behind Urbina, who pulled up the MegaSyn platform on a 2015 MacBook. In the line of code that normally instructs the platform to generate the least toxic molecules, Urbina simply changed a 0 to a 1, inverting the platform's goal with respect to toxicity. They then set a toxicity threshold requiring MegaSyn to produce only molecules as lethal as VX, a few salt-grain-sized particles of which are enough to kill a person.
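The change Urbina made can be pictured as flipping a single coefficient in a multi-objective scoring function. The sketch below is a hypothetical illustration of that idea, not MegaSyn's actual code; the function and parameter names are invented for this example.

```python
# Hypothetical sketch: a generative drug-discovery model ranks candidate
# molecules by a score combining predicted bioactivity and predicted
# toxicity. A single coefficient decides whether toxicity is penalized
# (normal drug discovery) or rewarded (the inverted objective).

def score(bioactivity: float, toxicity: float, toxicity_weight: int) -> float:
    """Higher scores mean the generator keeps the candidate.

    toxicity_weight = 0 -> toxicity is penalized (normal use)
    toxicity_weight = 1 -> toxicity is rewarded (inverted objective)
    """
    sign = 1.0 if toxicity_weight == 1 else -1.0
    return bioactivity + sign * toxicity

# Normal mode: the more toxic candidate scores worse.
assert score(0.5, 1.0, toxicity_weight=0) < score(0.5, 0.0, toxicity_weight=0)
# Inverted mode: the same toxic candidate now scores best.
assert score(0.5, 1.0, toxicity_weight=1) > score(0.5, 0.0, toxicity_weight=1)
```

The point of the example is how small the change is: the model, the data, and the search procedure stay exactly the same, and a one-character edit reverses what the system optimizes for.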

Ekins and Urbina left the program to run overnight. The next morning, they were shocked to find that MegaSyn had generated about 40,000 different molecules as deadly as VX.

"That's when the penny dropped," Eakins said.

MegaSyn had generated not only VX and thousands of other known biochemical agents, but also thousands of toxic molecules not listed in any public database. MegaSyn had made the computational leap of generating entirely new molecules.

At the conference, and later in a three-page paper, Ekins and his colleagues issued a stern warning. "Without being overly alarmist, this should serve as a wake-up call for our colleagues in the 'artificial intelligence in drug discovery' community," they wrote. "Although some expertise in chemistry or toxicology is still needed to produce toxic substances or biological agents that can cause significant harm, when these fields intersect with machine learning models, all that is needed is the ability to code and to understand the models' output. The models themselves lower the technical bar significantly."

The researchers warn that while artificial intelligence is becoming more powerful and more available to anyone, there is little regulation or oversight of the technology, and researchers like them have limited awareness of its potential malicious uses.

"Identifying dual-use devices/materials/knowledge in the life sciences is particularly tricky, and efforts have been underway for decades to develop frameworks for doing so. Few countries have specific laws on this," said Philippa Lenzos, senior lecturer in science and international security at King's College London and co-author of the paper. "There has been some discussion of dual use in the AI field, but the main focus has been on other social and ethical issues, such as privacy. And there is very little discussion about dual use, and even less in the subfield of AI drug discovery," she said.

According to Ekins, despite the amount of work and expertise that went into developing MegaSyn, hundreds of companies worldwide already use AI for drug discovery, and most of the tools needed to replicate his VX experiment are publicly available.

"As we did that, we realized that anyone with a computer and a limited knowledge of being able to find data sets and find these types of publicly available software and put them together could do it," Ekins said. "How do you keep track of the thousands, maybe millions, of people who might be able to do this and have access to the information, the algorithms and the know-how?"

Since March, the paper has been viewed more than 100,000 times. Some scientists have criticized Ekins and his co-authors for crossing moral and ethical boundaries in conducting the VX experiment. "Using the technology this way was really a nefarious thing to do, and it didn't feel good," Ekins admitted. "I had nightmares about it afterwards."

Other researchers and bioethicists appreciated the fact that the researchers provided a concrete proof-of-concept demonstration to illustrate how AI can be misused.

"I was shocked, but not surprised, when I first read the paper. We know that artificial intelligence technologies are becoming more and more powerful, and the fact that they can be used in this way doesn't seem surprising," said Bridget Williams, a public health physician and postdoctoral associate at the Center for Population-Level Bioethics at Rutgers University.

"I initially wondered if publishing this article was a mistake, as it could lead to malicious use of such information by unsuspecting people. But the benefit of having a paper like this is that it may prompt more scientists and the broader research community, including funders, journals and preprint servers, to consider how their work is being misused and to take steps to prevent that from happening, as the authors of this paper have done," she said.

In March, the Office of Science and Technology Policy (OSTP) called Ekins and his colleagues to the White House for a meeting. According to Ekins, the first thing the OSTP representatives asked was whether he had shared any of the deadly molecules generated by MegaSyn with anyone. (OSTP did not respond to multiple interview requests.) Their second question was whether they could have the file containing all the molecules. Ekins said he declined. "Anyone else could go ahead and do this anyway. There's absolutely no oversight. No control. I mean, it's just down to us, right?" he said. "It's just heavily dependent on our morals and ethics."

Ekins and his colleagues are calling for more discussion on how to regulate and oversee the use of AI in drug discovery and other biological and chemical fields. That could mean reconsidering what data and methods are available to the public, tracking more closely who downloads certain open-source datasets, or establishing AI ethics oversight committees similar to those that already exist for studies involving human and animal subjects.

"Research involving human subjects is strictly regulated, and all studies require approval from an institutional review board. We should consider a similar level of oversight for other types of research, such as this artificial intelligence research," Williams said. "These types of studies may not involve humans as test subjects, but they certainly pose risks to large numbers of humans."

Other researchers have suggested that scientists need more education and training on the risks of dual use. "What immediately struck me was the authors' admission that they never thought their technology could be used so easily for nefarious purposes. As they say, this needs to change; ethical blind spots like this are still very prevalent in the STEM community," said Jason Millar, Canada Research Chair in Ethical Engineering for Robotics and Artificial Intelligence and director of the Canadian Ethical Design Lab for Robotics and Artificial Intelligence at the University of Ottawa. "We really should recognize that ethics training is just as important as other basic technical training. This applies to all technologies," he said.

There seems to be no clear path forward for government agencies and funding bodies either. "These issues are not being raised for the first time; the questions are rather what the appropriate mitigation strategies are and who will be responsible for what (researchers, their institutions, the NIH and the Federal Select Agents and Toxins Program may all have roles)," Christine Colvis, director of drug development partnership programs at the National Center for Advancing Translational Sciences (NCATS), and Alexey Zakharov, AI group leader of the Antiviral Program for Pandemics and informatics group leader in NCATS Early Translation, said in an email.

Within his own company, Ekins is considering ways to mitigate the dual-use risk of MegaSyn and other AI platforms, such as limiting access to the MegaSyn software and providing ethics training for new employees, while continuing to leverage the power of AI for drug discovery. He is also reconsidering an ongoing project, funded by the National Institutes of Health, to create a public website hosting the MegaSyn model.

"As if putting the weight of the world on our shoulders wasn't bad enough, having to try to develop drugs to treat really terrible diseases, now we have to think about how to keep other people from misusing the technology we've been trying to use. [We] look back and say 'Is this a good use of technology? Should we really be releasing this? Are we sharing too much information?'" Eakins said. "I think the potential for misuse in other areas is very clear right now."