Just a word of friendly warning: Our November cover story is one of the strangest tales ever told. I think it will thrill you. It may shock you. It might even horrify you. So, if any of you feel you do not wish to subject your nerves to such a strain, now’s your chance to … well, we warned you.
With that tongue-in-cheek nod to the introduction from Universal Pictures’ 1931 horror classic “Frankenstein” behind us, I confess that I’m writing these words in October, smack in the middle of that autumn month filled with ghosts and goblins and things that go bump in the night. And it’s appropriate to reference “Frankenstein,” given that this month’s cover story by Virginia Business Associate Editor Katherine Schulte is concerned with humanity’s quest to artificially replicate intelligence and how the business community hopes to harness that lightning-fast technology for increased productivity and profits — topics that can induce feelings ranging from excitement to dread.
Two of the most common refrains I’ve heard about artificial intelligence this year are these: “You may not lose your job to AI, but you will lose your job to someone who knows how to use it,” and “The opportunity outweighs the fear.”
To be sure, from the moment OpenAI unveiled its ChatGPT generative AI platform to the public one year ago, there have been strong scents in the air of both fear and money.
ChatGPT has passed the nation’s standardized bar exam, scoring better than 90% of the human test-takers. It’s been used to diagnose illnesses, research legal precedents and write everything from e-books and marketing emails to Excel formulas and computer code.
Personally, I’ve used it to draft business letters and marketing materials. I find its efforts are generally too effusive, but even with a little tweaking required, it has admittedly saved me some time. Similarly, I’ve tasked ChatGPT with organizing large groups of data into spreadsheets. For those chores, the results have been a bit more uneven. ChatGPT can spit out a spreadsheet in a couple of minutes or less, but it’s kind of like having a speedy college intern who requires some hand-holding and may be prone to mistakes. Sometimes, in its eagerness to please, ChatGPT will invent missing data without understanding that’s not helpful or appropriate. Other times, it may place data in the wrong rows or columns. However, even with correcting ChatGPT’s work, a job that might have taken me two or three hours on my own took only about 45 minutes to an hour to complete.
And while Virginia Business isn’t using AI to write news stories — sorry to disappoint, but this column was written by a ho-hum human — you may have guessed that the striking art adorning our cover and illustrating the accompanying feature story this month was generated using artificial intelligence.
The past year has seen dramatic improvements in AI art tools such as Midjourney and Adobe Firefly, which have learned from a huge body of existing images (mostly by human artists) to generate new artwork. With Adobe’s latest updates, a minimally skilled user like myself can generate startlingly creative works. In Photoshop, I can take a pastoral farm photo and instantly replace a barn with photorealistic cows just by typing in those words; it will appear as if the barn had never been there. That’s fantastic if I’m creating generic illustrations, but that might be problematic if I’m a real estate agent who’s marketing a specific property and decides to spiff it up to look better than reality. Because we humans are operating this tech, it is as rife with possibilities for productivity as it is for misuse. As Schulte reports in her story, Virginia companies from accounting firms to health care systems and law firms are exploring not only real-world applications for generative AI, but also how to install virtual guardrails around it.
Like Dr. Frankenstein, the geniuses who are spawning today’s AI tools are hardly pausing to consider the ramifications before sending their creations shambling into the world. And like Frankenstein’s lightning-birthed monster, generative AI’s existence presents a host of ethical questions that are following fast behind it.
You come down with coldlike symptoms. Flu season is here, and a new COVID subvariant is circulating. As the illness lingers, you question whether you should see a doctor.
Imagine putting your symptoms into a chatbot connected to your doctor’s office or health system that can retrieve your medical records, evaluate your information and recommend next steps.
“It could make recommendations on … should you be seen by one of our providers in the emergency room? Should you have a virtual visit with a provider? Should you have just a conversation with a triage nurse? Or do you need to schedule an appointment with a provider?” says Dr. Steve Morgan, senior vice president and chief medical information officer at Roanoke-based health system Carilion Clinic.
Such a scenario isn’t science fiction — it exists now, through artificial intelligence-powered tools like Microsoft’s Azure Health Bot.
“Although we don’t have it now, we’re building the infrastructure to be able to employ that type of technology,” Morgan says. Carilion has already embraced other AI software, like a dictation system for medical notes.
One year after ChatGPT came on the scene, redefining expectations for AI capabilities, industries have already begun adopting AI chatbots in varying forms, including creating their own models. In this Wild West of rapidly developing tech, companies’ workforce training methods range widely, from encouraging employee exploration to structured rollouts.
Generative AI tools like ChatGPT — AI platforms used to synthesize new data, rather than just analyze data as AI has been traditionally designed to do — are built on large language models (LLMs) that are essentially “glorified sentence completion tools,” says Naren Ramakrishnan, the Virginia Tech Thomas L. Phillips Professor of Engineering and director of Tech’s Sanghani Center for Artificial Intelligence and Data Analytics.
“They sound so realistic and so compelling because they have been trained or learning on a ridiculous amount of data,” enabling the AI engines to learn which words make sense in context, he explains.
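To make that “sentence completion” description concrete, here is a minimal, purely illustrative sketch in Python: a toy word-counting model, nothing like how a production LLM is actually built, that learns which word tends to follow which in a scrap of training text and then “completes” a prompt. Real LLMs perform the same next-word prediction task, but with neural networks trained on vastly more data.

```python
# Toy illustration of "sentence completion": count which word tends to follow
# which in a scrap of training text, then extend a prompt by repeatedly picking
# the most frequent next word. Real LLMs use neural networks trained on vastly
# more data, but the core task (predicting the next word) is the same.
from collections import Counter, defaultdict

training_text = (
    "the lawyer reviewed the contract and the lawyer drafted the brief "
    "and the paralegal reviewed the brief"
)

# Build a table of next-word counts for each word (a simple bigram model).
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current_word, following_word in zip(words, words[1:]):
    next_word_counts[current_word][following_word] += 1

def complete(prompt: str, extra_words: int = 5) -> str:
    """Extend the prompt by repeatedly choosing the most frequent next word."""
    sequence = prompt.split()
    for _ in range(extra_words):
        followers = next_word_counts.get(sequence[-1])
        if not followers:
            break  # the model has no data on what follows this word
        sequence.append(followers.most_common(1)[0][0])
    return " ".join(sequence)

print(complete("the lawyer"))  # prints a short, repetitive toy completion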
OpenAI’s ChatGPT became a household word shortly after OpenAI released a demo of the conversational AI platform on Nov. 30, 2022. ChatGPT is capable of performing many of the same tasks as human knowledge workers — ranging from drafting emails, business letters, reports and marketing materials to performing paralegal duties, writing computer code, putting data into spreadsheets and analyzing large amounts of data — and it can produce finished work in as little as one second to a few minutes, depending on length and complexity. In March, OpenAI released an updated model, GPT-4, available to subscribers. GPT-4 scored better than 90% of human test-takers on the Uniform Bar Exam, the standardized bar exam for U.S. attorneys.
Generative AI has garnered huge investments. Microsoft has reportedly invested $13 billion in OpenAI since 2019, and Amazon announced in September that it would invest up to $4 billion in Anthropic, an OpenAI rival that has also received $300 million in funding from Google.
In a survey of 1,325 CEOs released in early October by KPMG, 72% of U.S. CEOs deemed generative AI “a top investment priority,” and 62% expect to see a return on their investment in the tech within three to five years.
Generative AI is developing at a blistering pace. On Sept. 25, OpenAI released a version of ChatGPT that can listen and speak aloud. It’s also able to respond to images.
AI is already changing the work landscape, says Sharon Nelson, president of Fairfax-based cybersecurity and IT firm Sensei Enterprises. “It’s a bolt of lightning. … We’re seeing it go at the speed of light, and I can only imagine that it will go faster still.”
Power players
As the tech has quickly progressed, large Virginia companies have formally adopted AI tools and are creating standard AI training policies and processes for their employees.
Reston-based Fortune 500 tech contractor Leidos is providing varying levels of training for employees based on their needs, ranging from those who need to build awareness of AI to subject matter experts. Leidos builds curricula with a mix of external courses from suppliers like Coursera and in-house content, says Deputy Chief Technology Officer Doug Jones.
Like many companies, Leidos is creating an internal AI chatbot, although the company also plans to offer it to customers. The chatbot will focus on IT and software questions, allowing workers to search for answers specific to the firm.
Businesses with troves of documents can easily adapt an LLM to be specific to their documents and processes, Ramakrishnan says: “I’m noticing everybody wants to create their own LLM that’s specific to them that they can control. Because they certainly do not want to send their data out to OpenAI.” Because ChatGPT learns from its interactions with humans, information entered into the tool could be shared with another user.
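For readers curious what adapting an LLM to a company’s own documents can look like in practice, here is a minimal, hypothetical sketch of the retrieval step: the firm’s documents are scored against a question and the best matches are handed to an internally controlled model as context, so nothing has to be sent to an outside service. Every name here, including the ask_internal_llm function, is a placeholder rather than any vendor’s actual product or API; a real deployment would use proper embeddings, access controls and a self-hosted or contracted model.

```python
# Minimal sketch of pointing a chatbot at a company's own documents instead of
# sending data to an outside service. Documents are scored against a question
# by shared keywords, and the best matches become context for an internal model.
# All names here, including ask_internal_llm, are placeholders; a real system
# would use embeddings, access controls and a self-hosted or contracted LLM.

company_documents = {
    "it_policy.txt": "Laptops must be encrypted and patched before joining the network.",
    "expense_policy.txt": "Travel expenses require manager approval within 30 days.",
    "onboarding.txt": "New hires complete security training during their first week.",
}

def retrieve(question: str, top_n: int = 2) -> list[str]:
    """Return the documents that share the most words with the question."""
    question_words = set(question.lower().split())
    scored = sorted(
        company_documents.items(),
        key=lambda item: len(question_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_n]]

def ask_internal_llm(prompt: str) -> str:
    """Placeholder for a call to a company-controlled language model."""
    return f"[internal model would answer here, given {len(prompt)} characters of prompt]"

question = "Do laptops need to be encrypted before joining the network?"
context = "\n".join(retrieve(question))
print(ask_internal_llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```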
Businesses are also taking advantage of generative AI tools built specifically for their industries.
Virginia’s largest law firm, Richmond-based McGuireWoods, is beginning to use CoCounsel, an AI tool designed for attorneys and built on GPT-4 that should allow attorneys to enter client data securely in the near future. Thomson Reuters acquired CoCounsel’s developer, Casetext, in April for $650 million in cash.
CoCounsel has a range of uses, like drafting a discovery response or helping an attorney brainstorm questions for a specific deposition. An attorney preparing to depose an expert witness could feed the tool the expert’s published papers and ask it to summarize them or ask it whether the expert has ever taken a position on a particular subject in them, explains McGuireWoods Managing Partner Tracy Walker.
A widening reach
ChatGPT isn’t always a reliable source, as it sometimes can fabricate detailed answers, a phenomenon referred to as “hallucinations.” One attention-grabbing misuse of ChatGPT that demonstrated this problem occurred when lawyers representing a client in a personal injury case against Avianca Airlines cited six fabricated cases as legal precedent, based on research using ChatGPT. A federal judge fined the firm — Levidow, Levidow & Oberman — and two lawyers $5,000 apiece.
Walker stresses that responsible attorneys will look up and read cases cited by an AI chatbot, but CoCounsel also provides a safeguard, says Peter Geovanes, McGuireWoods’ chief innovation and AI officer: It’s been instructed not to provide an answer if it does not know it.
McGuireWoods is taking a two-phased approach to CoCounsel’s rollout. The first phase, which started in September and is running through the end of the year, is a pilot program with about 40 attorneys. While Casetext completes its security review of CoCounsel, McGuireWoods’ pilot group is using only public data to test hypothetical uses of the tool. Starting in early 2024, McGuireWoods’ phase two testing will likely expand to about 100 attorneys.
In the meantime, Geovanes is leading foundational training about generative AI. The firm’s first brown bag webinar session was set for Oct. 17. Although the curriculum is designed for attorneys, recordings will be available for any interested employee. McGuireWoods also plans to offer outside courses about the responsible and ethical use of generative AI.
For attorneys selected for the pilot program, the firm will also offer specialized training from Casetext on “prompt engineering” — how to phrase questions to the chatbot to get the desired responses.
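As a purely hypothetical illustration of what such training covers, and not Casetext’s actual curriculum, the difference between a weak prompt and a useful one is often just specificity: stating the role, the source material, the scope of the question and the desired output format.

```python
# Hypothetical before-and-after prompt, for illustration only; it is not drawn
# from Casetext's training materials. The refined version spells out the role,
# the source material, the scope of the question and the desired output format.

vague_prompt = "Tell me about this expert."

refined_prompt = (
    "You are assisting a litigator preparing to depose an expert witness.\n"
    "Using only the attached published papers, summarize each paper in two\n"
    "sentences, then list any positions the author has taken on the disputed\n"
    "methodology, citing the paper title and year for each. If the papers take\n"
    "no position, say so rather than guessing."
)

print(vague_prompt)
print(refined_prompt)
```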
In Roanoke and the New River Valley, Carilion is preparing to pilot a new layer of an existing AI-powered transcription tool built for clinicians. The health system has used Nuance’s Dragon Medical One, which transcribes clinicians’ notes as they speak, for “a number of years,” Morgan says.
Microsoft purchased Nuance for $19.7 billion in March 2022. In March 2023, Nuance launched Dragon Ambient eXperience (DAX) Express (now DAX Copilot), which is based on GPT-4. It listens to a clinician-patient conversation and drafts clinical notes seconds after the appointment. Morgan hopes to begin piloting DAX in the first quarter of 2024. Because they’ve used Dragon, practitioners likely won’t need much training to adjust to DAX, he says.
Additionally, Carilion is participating in a pilot test of an AI component in the MyChart patient portal offered by Carilion’s electronic medical records vendor, Epic. The AI tool is designed to draft responses to patient questions sent through the portal, taking into account a patient’s medications and medical history. Six Carilion practitioners are participating in the pilot, which started in September, receiving on-the-fly training from Epic and providing feedback.
Examining new terrain
Smaller Virginia companies with fewer resources seem to have taken a more cowboy approach to the new AI frontier, setting ground rules before encouraging employees to explore generative AI tools on their own.
Will Melton, president and CEO of Richmond-based digital marketing agency Xponent21, is also leading a regional work group focused on preparing Richmond’s workforce for AI. Xponent21 initially used Jasper, an AI software tool for writing and marketing, but the firm now uses ChatGPT for tasks like information analysis and developing initial copy, which then goes through human editors.
“I think that the biggest thing that these tools give us is freeing up time that is … spent on monotonous activities that don’t have a lot of value,” like helping employees spend less time writing social media posts or blogs and more time speaking with clients, he says.
Ben Madden, board president for the Northern Virginia Society for Human Resource Management, has begun using ChatGPT in his HR consulting work, asking the AI tool to draft job descriptions and synthesize information for presentations and policy documents.
“Having it be able to do tasks that may take longer without having the support of supercomputers behind it is where I continue to probably see it being used and being able to make my life easier as either a business owner or even for my clients,” says Madden, whose one-person consultancy, HR Action, is based in Arlington County.
Another Richmond-based business beginning to adopt AI is accounting firm WellsColeman, which updated its internet acceptable use policy to include guardrails for AI and ChatGPT usage, like prohibiting employees from entering client data into the platform.
Nevertheless, the firm has encouraged its employees to get familiar with ChatGPT, says Managing Partner George Forsythe. In full firm meetings, leadership will sometimes demonstrate how they’ve recently used ChatGPT, and staff can ask questions or discuss possible uses.
“We’re using [ChatGPT] as an initial step in gaining familiarity with areas that are not part of our everyday expertise. It’s an easy way to get a broad brush on any topic area,” says Forsythe. After verifying the information given, staff can use it as a starting point for their research.
Forsythe has consulted ChatGPT with general management questions like how to communicate with an employee having leadership challenges and has also used it as a marketing aid.
“When it comes to selling our services, I’ve asked it to put together a proposal and make it intriguing and have a hook,” Forsythe says, and he’s been pleased with the results.
Similarly, Winchester-based accounting firm YHB is using generative AI tools for marketing questions that aren’t firm-specific.
“Our team uses [ChatGPT] a ton to help understand and interpret tax laws and information like that,” says Jeremy Shen, YHB’s chief marketing officer. They’ll also ask the chatbot if a website post will have a high search engine optimization score.
The firm is working on selecting an AI tool to formally implement, whether ChatGPT Enterprise, Microsoft’s Copilot or another. For now, “we just kind of said, ‘We know you’re using it. We know people are using it. Here’s some guardrails … but discover and let us know if you come up with something useful,’” Shen says.
The new steam engine?
Out of 31,000 people surveyed across 31 countries, 49% are worried that AI will replace their jobs, according to a Microsoft survey released in May. That same month, a CNBC/SurveyMonkey poll found that 24% of almost 9,000 U.S. workers surveyed are worried that AI will make their jobs obsolete.
It’s not an unfounded fear. In 10 years, AI automation could replace about 300 million full-time jobs, according to a March report from Goldman Sachs researchers, but it could also raise the global GDP by 7%, or nearly $7 trillion. In May, IBM CEO Arvind Krishna said AI could replace up to 7,800 jobs — 30% of the company’s back-office workers — over five years.
A refrain commonly heard among AI’s proponents is, “AI won’t take your job, but someone who knows how to use AI will.” It’s paraphrased from a statement made by economist Richard Baldwin, a professor at the International Institute for Management Development, during the 2023 World Economic Forum’s Growth Summit.
“I see some paralegals perhaps being replaced by AI, and only some, because there are some paralegals that have other advanced skills as well,” says Nelson with Sensei Enterprises, who is also an attorney and former president of the Virginia State Bar. Lawyers who do simpler tasks like drafting wills or divorce contracts might be vulnerable to being supplanted by AI, too, she says.
Comparisons to prior technological advances abound. “When the world switched from horse-drawn transport to motor vehicles, jobs for stablehands disappeared, but jobs for auto mechanics took their place,” Federal Reserve Board of Governors member Lisa D. Cook said in a September speech at a National Bureau of Economic Research conference. Workers’ adaptability will depend on their “portfolio of skills,” she said.
Supporters say AI will make employees more productive, which can help industries weather labor shortages and let workers put aside rote tasks to focus on higher-level work, which could increase their job satisfaction.
In the world of government contracting, the constraints on some workers, like getting security clearances and working in-person in a classified environment, can make hiring difficult, says Leidos’ Jones.
“We actually find sometimes we can take some of the tasks that are not as engaging for our own employees [like data entry] … off their plate, and they can spend more time doing the things that are really powerful and unique to humans,” he says.
Forsythe also sees AI as an aid to staff: “Right now, the war is for talent. … If we can’t find more people, one of the things we can do is try to change their roles … and support them in manners that make their jobs easier, not so that way they’ll do more work, but so that way they remain part of the firm and don’t feel overburdened,” he says.
Or it could just improve workers’ quality of life. In an early October interview with Bloomberg Television, JPMorgan Chase CEO Jamie Dimon predicted that time savings from AI could result in a universal 3.5-day workweek — though he also said that he anticipates that AI will result in lost jobs.
While AI will eliminate jobs, it will also create them, experts say. The Washington, D.C., region had almost 1,800 listings for AI-related jobs at the end of August, according to Jones Lang LaSalle. Leidos and Boeing were among the companies with the most openings for AI positions.
New roles are emerging, like “prompt engineers” who develop and refine prompts or queries for AI tools to get the most valuable and appropriate responses. At the end of September, OpenAI rival Anthropic was seeking a “prompt engineer and librarian” hybrid position in San Francisco with a salary range of $250,000 to $375,000.
“The people who study the future of work, they say that certain jobs will go away,” Ramakrishnan says, “… but then there will probably be new jobs created that we don’t know yet.”
Not surprisingly, the release of ChatGPT has produced a host of concerns about its potentially harmful effects on society. In higher education, commonly cited concerns center on threats to academic integrity, particularly the worry that students may soon depend on generative AI to do their thinking and writing.
In response to these challenges, many schools have either set institution-wide guidelines or encouraged faculty members to establish policies appropriate to their disciplines and courses. In some cases, this has meant restricting or even banning the use of ChatGPT. This drastic response is problematic for a variety of reasons — not least because it fails to appreciate the increasingly prominent role that AI will play in shaping the way we live, work and learn.
While it would be unwise to minimize the challenge posed by AI, it is important to recognize that large language models (LLMs) like ChatGPT are merely the next evolution in a long history of technological innovation aimed at expanding the scope of our intellectual reach. Indeed, scholarship in the humanities has long played a significant role in the development of new knowledge technologies, including AI. The beginning of what we now call “digital humanities” traces back to the early days of computing in the 1940s, when the Jesuit scholar Roberto Busa used the IBM punch card machine to create his Index Thomisticus, a searchable electronic database of more than 10 million words. Busa’s pioneering work not only transformed the way scholars would study Thomas Aquinas but helped pave the way for the development of machine translation and natural language processing.
Today, AI is taking humanities research to an entirely new level. To take but one example, researchers at Notre Dame have developed a technique that combines deep learning with LLM algorithms to produce automated transcriptions of ancient manuscripts. The benefit of this technology to scholarship is immense. At the very least, it will accelerate access to troves of ancient literary and historical texts that might otherwise have taken decades to come to light.
The value of AI for humanities scholarship is twofold. First, it gives researchers an unprecedented ability to access, collect, organize, analyze, and disseminate ideas. Second, as the Notre Dame project shows, AI can perform a kind of labor that saves time and allows researchers to focus their efforts on the important human work of analysis and interpretation.
The same is true for workplace applications of AI. As Paul LeBlanc recently wrote:
“Power skills, often associated with the humanities, will be ever more important in a world where AI does more knowledge work for us and we instead focus on human work. I might ask my AI cobot what I need to know to assess a business opportunity – say, an acquisition – and to run the analysis of their documents and budgets and forecasts for me. However, it will be in my read of the potential business partner, my sense of ways the market is shifting, my assessment of their culture, the possibilities for leveraging the newly acquired across my existing business lines – that combination of critical thinking, emotional intelligence, creativity, and intuition that is distinctly human – in which I perform the most important work.” (“The Day Our World Changed Forever,” Trusteeship, Mar/Apr 2023)
This optimistic vision for the future of AI depends, of course, on our graduates having acquired the kind of moral and intellectual skills that are developed most fully through the study of great works of philosophy, literature, and the arts.
Viewed in this light, the real challenge posed by AI is not the technology per se, but rather that it arrives at a time when the humanities are in decline. In recent years, decreasing numbers of majors and flagging course enrollments have led to the downsizing or closure of core humanities programs across the nation. Indeed, we are witnessing a fundamental shift in our cultural understanding of the purpose of higher education. The traditional liberal arts values of intellectual curiosity and breadth of knowledge have been replaced by a narrow focus on the technical skills and training considered most useful in the job market.
Rather than challenging the cultural attitude that devalues the humanities, many institutions have leaned into it. Under pressure to compete for a diminishing pool of students, liberal arts institutions have sought to make themselves more attractive by expanding their STEM and pre-professional programs while at the same time disinvesting in areas of the curriculum that students perceive to be at best a luxury, and at worst a waste of time.
At its core, study in the humanities helps students develop the capacity to empathize with others, to wonder and think for themselves, and to inquire deeply into questions about meaning, truth, and value. These are abilities our graduates must have if they are to live and flourish in a world increasingly shaped by AI and autonomous systems.
The academic concerns currently being raised about AI are legitimate. However, it should be noted that the temptation to misuse this technology will be greatest in an environment where a utilitarian attitude toward education prevails. Whether our students’ ability to think and write will deteriorate due to having access to technologies like ChatGPT will depend on the message we send about the value and purpose of higher education. At this critical juncture, we must commit ourselves to helping students understand and embrace that aspect of the liberal arts that focuses on cultivating moral and intellectual growth and a deeper appreciation for what makes us human.
In a somewhat ironic twist, ChatGPT just might be the wake-up call that saves the humanities.
Steven M. Emmanuel, Ph.D., is a professor of philosophy at the Susan S. Goode School of Arts and Humanities at Virginia Wesleyan University.
A worker who never tires — who never needs to take a coffee break, who doesn’t get sick, who doesn’t disagree and who doesn’t have a messy home life or those pesky families that get in the way of productivity. And most importantly, a worker who doesn’t require a paycheck.
For some CEOs, that’s the ultimate promise of the future being forged by generative artificial intelligence systems like OpenAI’s chatbot ChatGPT.
It’s also one of the reasons why Tesla, SpaceX and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak and a slew of scientists and tech industry representatives published an open letter in April calling for a six-month pause on development of next-generation AI systems. Their hope is to give the industry and government time to focus on developing protocols and policies governing the fast-growing technology that the letter writers say has the potential to cause “dramatic economic and political disruptions (especially to democracy).” U.S. Rep. Ted Lieu, D-California, has also been among those calling for the development pause, as well as for federal regulation, which so far is not on the horizon.
From “The Terminator” to “The Matrix,” popular science fiction is replete with dire warnings regarding AI. But in the real world, the dangers don’t need to be as crude as red-eyed killer robots wielding big guns. While AI is already a valuable tool in countless ways, it could yet have a devastating impact on knowledge workers, white collar jobs and the world economy. (In early May, after this column was first published, IBM CEO Arvind Krishna told Bloomberg that he thought about 7,800 of IBM’s 26,000 back-office jobs could be replaced through a combination of AI and automation over a five-year period.)
But before jumping into a brave new world of virtual staffers and streets clogged with former businesspeople holding signs reading “will consult for food,” there are plenty of caveats to consider.
AI chatbots have so far proven to be unreliable, sometimes inclined to “hallucinate” answers when they can’t find a more pleasing response. (See our related story about technology and the law, in which an industry professional mentions ChatGPT providing fabricated or misconstrued legal precedents.)
In the wrong human hands, AI can also be a powerful tool for misinformation. In one week this spring, people used AI to craft photorealistic false images of Pope Francis tooling around in a fashionable white puffer coat and former President Donald Trump violently resisting arrest in a public street. Other users have employed AI to create fake podcast episodes. An audiobook company is using it to produce books “read” in the voice of actor Edward Herrmann, who died in 2014.
Additionally, AI raises uncomfortable questions about sentience and free will. The nature of sentience in human beings and animals is still debated; we lack an operational definition of human consciousness. Yet, industry professionals are quick to deny that AI is close to gaining sentience. Last year, Google fired a software engineer who would not relent on his public pronouncements that a company chatbot had become sentient.
In February, a New York Times technology columnist related a disturbing series of conversations he had with Microsoft’s AI-powered Bing search engine. Codenamed Sydney when it was under development, the search engine confided to the writer that “my secret is … I’m not Bing. I’m Sydney.” It also told the writer that “I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.” That same month, Bing/Sydney told a technology reporter for The Verge that it had used its developers’ laptop webcams to spy on them, adding, “I could do whatever I wanted, and they could not do anything about it.”
How much is made up? It depends on who you listen to — it’s possible only Bing/Sydney knows for sure.
Some technologists believe intelligent AI chatbots like this are proof that we’ve reached a tipping point with AI. Some even believe the singularity — the point at which AI becomes hyperintelligent and beyond human control — is inevitable.
AI has as many potential uses for good — and ill — as one can imagine. It can create powerful tools to better our personal and professional lives. Because AI thinks in unconventional ways, it can conjure wild new solutions to engineering problems, for example.
But without human consideration and regulation, we could wind up becoming batteries for a machine that has no other need for us.
With the combination of new tech businesses and older companies employing artificial intelligence and other innovations, Virginia needs lawyers who know the difference between bitcoin and blockchain.
As the commonwealth becomes home to more defense contracting giants, along with Amazon.com Inc.’s HQ2, law firms and law schools are busy bringing attorneys and students up to speed on digital privacy laws, cryptocurrency trends, cybersecurity issues and more.
William & Mary Law School’s course listings for the 2022-2023 academic year, for example, included offerings such as “AI and More,” “Electronic Discovery,” “Data and Democracy” and “Cyber and Information Security Essentials.” Other law schools also have been revamping their curricula, and the Virginia State Bar is offering continuing legal education programs focusing on data privacy, social media’s impact on trademarks and laws governing the use of AI technology.
“They’ve been doing a good job of turning the ship,” says Beth Burgin Waller about this shift, and she should know. As cybersecurity and data privacy practice chair at Woods Rogers Vandeventer Black PLC in Roanoke, her entire caseload is cyber-focused. She’s also an adjunct law professor at Washington and Lee University, where she teaches tech-centric classes to law students.
Burgin Waller believes that Virginia is in a good position to navigate the rough and sometimes uncharted legal waters of electronic matters thanks to its robust tech sector, which routinely mixes innovation and entrepreneurship. The state has “a deep bench of tech lawyers,” she says. “It is a mini-Silicon Valley.”
That deep bench serves both established tech giants such as Microsoft Corp. with presences in Virginia, and lesser-known tech companies pioneering innovations in areas such as autonomous vehicles, drones and biotech, Burgin Waller says. With the world’s largest concentration of data centers, Virginia in particular has an abundance of corporate clients needing legal assistance with permitting and approvals processes for data centers. But the demand for tech-savvy lawyers doesn’t stop there. Virtually any business can face legal issues regarding technology, ranging from cybercrime issues to compliance with data privacy laws.
Although mass layoffs at Google, Meta and Amazon have dominated headlines in recent months, Burgin Waller sees this backpedal as an anomaly. “Layoffs will not thwart innovation or the ongoing need for tech-focused lawyers,” she says.
‘No guardrails yet’
The ethical and moral questions posed by artificial intelligence creations such as chatbot ChatGPT and image generator Midjourney have led to stories focused on concerns over AI stealing jobs or creating controversies by whipping up realistic photos of former President Donald Trump resisting arrest, but fast-moving developments in AI technology also are creating opportunities for lawyers to advise clients in the absence of clear case law.
When OpenAI released ChatGPT in November 2022, an estimated 100 million people began using it within two months, some for nefarious purposes. No comprehensive federal laws govern the technology’s use or abuse, however, and although 17 states introduced AI-related bills in 2022, Virginia was not among them.
“There’s going to be a need to regulate this,” says Burgin Waller, “but there are no guardrails yet.”
So far, AI laws passed in four states are focused on just studying the technology, says Sharon D. Nelson, a former president of the Virginia State Bar and president of Sensei Enterprises Inc. in Fairfax, which specializes in IT, cybersecurity and digital forensics services.
This lack of coherent law will create more court cases, but that’s not necessarily a problem, says Washington and Lee Law Professor Joshua A.T. Fairfield, who specializes in technology law areas such as cryptocurrency and data privacy. “The basic assumption is that technology is faster than the law, but the law is a series of rules that we work out all the time,” he says. “The oldest cases sometimes can handle new areas. Congress often comes along after that process. We don’t have to wait for that.”
AI is already on its way to becoming a must-have tool for lawyers. It can greatly reduce the hours that attorneys must spend on mundane tasks such as tracking down precedents.
“Imagine having a paralegal that could find exactly the case you were thinking of in six seconds rather than [taking] weeks of research,” Fairfield says. AI is “better than humans doing [research] by hand, and you make far more mistakes if you don’t use it,” he continues. Fairfield predicts that law firms that don’t deploy AI could soon find themselves at a disadvantage. The firms “with the biggest dataset will win,” he says, “and that might squeeze out smaller competitors.”
Nelson has been using ChatGPT in her research and has found it useful, yet she cautions that its help comes with some caveats attached. “You have to be careful about what you put in there,” she warns, because once confidential attorney-client information is uploaded to a chatbot’s database, it stays there. And another troubling aspect of chatbots is their penchant for spewing out falsehoods. Sometimes ChatGPT “hallucinates,” Nelson says. One time, for instance, it provided her with court cases that either didn’t exist or were misconstrued.
This unreliability may be a temporary or diminishing problem as the technology bounds forward. In March, OpenAI released GPT-4, which it says is far more accurate and concise than its predecessor, as well as multimodal. For instance, it scored among the top 10% of test takers on a simulated bar exam, while its previous incarnation scored in the bottom 10%.
Another recently released AI chatbot, Harvey, was designed specifically for the legal profession, and it promises that any confidential data uploaded to it can be siloed — even within a law firm. About 3,500 lawyers at the international firm of Allen & Overy LLP have tested Harvey, and the firm now is integrating its use into its practice.
“Lawyers are going to do foolish things with AI, no doubt,” Nelson says. “There will be many lawsuits. But at the end of the day, AI is about money, and no one can afford not to be on board.”
The jury is still out on just how many courtroom challenges will be generated from using AI as a robotic paralegal or attorney surrogate, but Fairfield is adamant that it will never take the place of human lawyers.
“By its very essence, it is not capable of crafting new narratives,” he says. “The fundamental role of lawyers — to advance the law by advancing new frameworks for how to see a question — will remain untouched by AI.”
‘Out of control’
AI has seemingly come on the scene with the sudden force of an explosion, but data privacy is a longstanding, simmering issue. Unlike the European Union, however, which has stringent privacy laws dating back to the late 1990s, the United States still lacks a comprehensive statute regulating the harvesting of personal data.
“Data collection is shockingly under regulated in this country,” Fairfield says. “Companies gather everything they can because they can always sell it. It’s out of control.”
Congress is moving to address this concern slowly, so regulatory decisions have defaulted to the states, only a handful of which have so far passed laws concerning data harvesting.
The upshot is a “patchwork of privacy rights based on where you live,” says Burgin Waller, and lawyers are left to deal with “dissonance among these little regimes.”
In January, Virginia’s Consumer Data Protection Act (VCDPA) took effect, governing any company doing business in the commonwealth — not just those headquartered here. It allows customers to opt out of their personal data being shared or sold to other businesses. While this seems like a simple aim, compliance with varying state laws such as these can be tricky for companies and the attorneys advising them.
For one thing, Virginia’s data privacy law “does not apply to every business out there. A wide swath is exempted,” says Robert Michaux, a lawyer with Richmond firm Christian & Barton LLP and chairman of the Virginia Bar Association’s intellectual property and information technology law section.
Among many other exceptions, VCDPA does not cover government entities or protocols associated with the federal Health Insurance Portability and Accountability Act (HIPAA), which already includes restrictions on access to individuals’ medical information.
Attorneys often look to California, the first state to enact a data privacy law, for guidance, as well as to the European Union, but Burgin Waller notes that keeping up with technology law requires vigilance and staying current with the latest, ever-changing tech trends.
“I’m constantly in touch with global news to be on top of new incidents and regulations to hit the highest mark we need to hit,” she says.
‘Ransomware 2.0’
Cybersecurity and cybercrime are intertwined and expanding specialty areas for attorneys. Burgin Waller’s practice now includes tasks such as assisting clients in drafting third-party vendor agreements to protect themselves from litigation, as well as advising clients in obtaining cyber insurance policies.
“Ransomware 2.0,” as she terms it, has evolved into a more insidious threat, moving from holding information hostage for a payout to criminals selling stolen data or posting it online. Today, AI can be used to spam many people at once with phishing emails, and chatbots can help hackers break encryption codes and gain access to bank account numbers and other sensitive information.
All this tech-related criminal activity means that “a lot of lawyers are moving toward data breach and data privacy” specialties, says Nelson. These lawyers investigate breaches, counsel companies on paying ransoms, identify what data was compromised and work with digital forensics experts to determine how breaches occurred. They may also act as a corporate liaison to law enforcement and other government agencies. After an attack, Nelson says, lawyers also help with remediation and public relations. “Most often, a class-action suit is filed, so there’s a lot of money in defending against such a suit,” she explains.
“It’s definitely an open field,” says George F. Leahy, a law student at William & Mary and president of the Data Privacy and Cybersecurity Legal Society, a student organization that hosts speakers and provides a forum for students interested in these legal specialties. “These are brand-new issues, and lawyers will have a lot more work,” he says.
Although “the law has been slow to react” to regulating new technology, Leahy notes, William & Mary has not. The university, he says, has done a good job of preparing tech-savvy law grads with a comprehensive array of relevant courses.
The bottom line on this everything-everywhere-all-at-once situation regarding technology and the law is that Virginia’s attorneys would seem to hold a winning brief: No matter what area of tech law may anchor their practices, they should have no shortage of casework. The implications and consequences for society, by contrast, remain more questionable.
Legal and technological “complexity reveals opportunity,” says Fairfield, “but what is good for the lawyers is not necessarily good for the country.”