
Breaking the rules

For lawyers, time has always been money, which has made the billable hour standard practice for the profession since the 1960s. But with the rise of artificial intelligence, which already can cut the time required to complete rote tasks from days or hours down to seconds, a once-inconceivable event just might come to pass — the supremacy of the billable hour could come to an end.

Results — not time spent on a project — are being touted as the new metric for evaluating the services that a lawyer can offer, and more firms are at least considering if not experimenting with different payment arrangements, including retainers and flat fees, which could become the norm someday.

That day’s not here yet, though. The billable hour’s death will be a slow one, because the legal profession is ultra-cautious about taking risks.  

Lawyers are like teenagers at a school dance, says Susan Hackett, an attorney and CEO with a law practice management consultancy in Bethesda, Maryland. They line the walls of the gym and wait for someone else to hit the dance floor first before they make any moves themselves.

“Lawyers like to be the first to be second,” she says. Hackett, whose Legal Executive Leadership firm has advised big corporate clients such as Mars and Hilton Worldwide Holdings on their legal departments’ business practices, is no fan of the billable hour — particularly from the client’s perspective. 

“Does anyone buy anything else this way?” she asks. “Sure, go ahead and build the house, and we’ll figure out the cost and how many bedrooms it should have after the work gets started. Lawyers have been paid to do their jobs dysfunctionally for a long time. Why not incentivize by giving more money for resolving a case rather than more money for continuing it?” 

But from law firms’ point of view, why change a payment system that yielded a national average hourly rate of $370 for associate lawyers and $604 for partners in 2023, according to a National Law Journal survey? For firms billing by the hour, that sounds like fixing something that isn’t broken, and there wasn’t a compelling reason to change the practice until AI arrived on the legal scene, threatening to upend how quickly legal services and advice could be delivered.

Even without AI in the mix, there have been rumblings of discontent about the prevalent legal billing model, including from in-house corporate legal departments that have expressed frustration over how expensive and unpredictable the costs of outside counsel can be, mainly because of the billable hour, according to a 2023 joint survey of corporate legal departments conducted by the Association of Corporate Counsel and Everlaw. Just 38% reported being satisfied with their ability to predict the costs of outsourced legal work.

And yet, in-house legal departments also are reluctant to try something new, the survey showed. Only 28% of survey respondents said that they were pursuing alternative fee agreements with outside law firms as a cost-cutting strategy, while just 33% were considering tapping into AI technology for legal work. 

“Billable hours is all they know,” Ken Callander, managing principal of Value Strategies, a legal consulting firm that promotes alternative fee agreements, told Law.com in November 2023. “To move toward something other than that, it’s really foreign to them.”

Colleen Quinn, founder of Richmond’s Quinn Law Centers, says not calculating hourly billing saves her a lot of time. Photo by Matthew R.O. Brown

Dipping their toes in

Williams Mullen Chairman, President and CEO Calvin W. “Woody” Fowler Jr. has seen it all in his nearly 40 years practicing law.

“Every 10 years, we hear that the billable hour is dead, but it is very resilient,” he says. His Richmond-based law firm — the state’s third largest — has 250 lawyers, 196 of them based in Virginia, and bills most of its clients by the hour. 

Fowler expects that it will take five years or even a decade before the profession sees a significant shift away from the billable hour. First will come the early adopters, he says, who are like those teenagers brave enough to get out on the dance floor first. Then, Fowler expects that most of those hanging out by the gym wall will gradually take to the floor, although a few probably will remain wallflowers and “watch all the way to the end,” he says. Those firms “might look like geniuses — or be left behind.”

Using AI for legal work is another big leap for law firms, many of which have reservations about its security and reliability.

 For now, Williams Mullen is staying off the AI dance floor for the most part. The firm has begun using large language model-based generative AI programs like ChatGPT that can create text for marketing and internal development purposes, and leaders of various practice areas have been reading and studying up on other uses of the technology.

But right now, AI presents too many proprietary and confidentiality issues to be applied to legal matters, Fowler says. “It’s not ready for prime time yet.” 

Iria Giuffrida, assistant dean for academic and faculty affairs at William & Mary Law School, finds such prudence appropriate.

Although Giuffrida thinks that in-house LLM-based tools probably can be trusted with client information, she is less confident about the ability of public AI platforms like ChatGPT to guarantee confidentiality — and “confidentiality is sacrosanct,” she says.

AI’s liability rests with the lawyer, Giuffrida points out, and some attorneys have made well-publicized AI-related errors and paid the price.

Last year, for example, two New York attorneys were sanctioned for filing a brief with AI-generated case citations that turned out to be fake, and in February, a Massachusetts attorney was fined $2,000 for producing a document that also included fake citations caused by an AI tool’s “hallucinations.”

“AI is not a barred attorney,” Giuffrida says, so any malpractice that results from its use is the lawyer’s responsibility — and problem. At the same time, the “incredible efficiencies” that AI offers can’t be ignored, she says, and some attorneys undoubtedly will turn to AI to increase their productivity. That, in turn, will lead clients to question why they should pay a billable hour for a service, such as creating a legal document, that can be performed in less than a minute with AI’s assistance. 

Giuffrida expects that small and medium-sized firms that specialize in areas of the law that are fairly predictable, such as adoptions or real estate, will be among the first to use AI tools like ChatGPT in large numbers because AI is good at repetitive tasks.

By employing the new technology, they could punch above their weight when competing with larger firms for clients, she says. AI can take the place of a paralegal or first-year associate — whom small firms may not be able to afford — and do a lot of the grunt work involved in building some legal cases. 

James K. Cowan Jr. is chairman of CowanPerry, a small Blacksburg commercial law firm that performs work for flat fees, monthly retainers and other alternatives to hourly billing. Photo by Don Petersen

Excitement and caution

Three small Virginia law firms contacted for this story all bear out Giuffrida’s predictions concerning AI. They’re enthusiastic about the technology’s potential, but, just like Fowler at Williams Mullen, they are also in wait-and-see mode. 

James K. Cowan Jr. is chairman of CowanPerry, a six-lawyer Blacksburg-based firm that performs mostly corporate and real estate work for flat fees, monthly retainers and other alternatives to hourly billing. “We do so much work on fee that any efficiencies are great for us,” Cowan says, which is why he expects that his firm will be among the first in Virginia to employ AI technology in legal work. 

“In 10 years, CowanPerry has developed a tremendous amount of work product, but the hardest thing is to find it,” he says. “AI can help. It gives you suggestions and analysis and allows you to focus on the things that matter. You can think more about strategy, the human side.”  

Colleen M. Quinn, founder of Richmond’s Quinn Law Centers, which specializes in personal injury, employment law, estate planning and adoption and surrogacy, uses alternative fee agreements for her more straightforward cases, and she says they’re popular with clients.

“The younger generation really likes to have a set amount,” she says, and it’s a time-saver for her two-attorney firm to not have to calculate hourly billing. 

She also views AI as a potentially huge time-saver for estate planning and other legal areas with recurrent procedures and documentation, although AI tools are less useful for litigation, given the technology’s unpredictability. “Ethically, there still has to be fact-checking behind it,” she says.

Dunlap Law, a firm based in Richmond and Monterey that serves mostly small businesses, has jettisoned billable hours, says managing partner Tricia Dunlap.

Her five-attorney, all-female practice offers its clients varying levels of service at different price points that are “clear, precise and transparent,” Dunlap says. “We put clients in charge of their budgets, and a level of trust begins to develop.” Like Quinn, she notes that not tracking billable minutes saves her firm time.

Dunlap expects that what she calls “the superpower of AI” will shift focus away from time spent on tasks at many law firms, and that the demise of the billable hour could benefit clients in many ways, including some that are not purely financial. “The time we don’t spend drafting a contract,” she says, “we can spend learning a client’s business and helping them avoid mistakes.”

Although the Virginia State Bar declined to comment on any trends concerning billing practices or the use of AI tools among its members, the marketplace is usually an accurate barometer of change — and the market for AI legal services is hot. 

“I get five calls a day from people trying to sell me AI,” Fowler notes.

The future of law? 

One of those calls might have been from a firm like Henchman. An international legal services company based in New York and Ghent, Belgium, Henchman furnishes AI-assisted contract drafting services to hundreds of law firms in 35 countries, including many in the United States. For a subscription fee, Henchman provides clients “instant access to their previously written clauses and definitions,” making work accessible in seconds.  

Michiel Denis, Henchman’s head of growth, says his company experienced 1,300% revenue growth in the past year and a half, and it now counts among its clients the multinational Boston-based law firm of Goodwin Procter, which has 1,800-plus attorneys. Most of its business, though, as Giuffrida predicted, comes from small and mid-sized law firms.  

“It comes as no surprise that the rollout of a technology platform is slower for bigger firms, as they typically have more practice groups leveraging new technologies,” Denis explains. Key to his company’s success has been what he calls the “bespoke” training it offers on the use of its systems. “Next to privacy and reliability, we notice user-friendliness is a key sticking point for Henchman clients,” he says. 

Hackett would second that. She has seen too many law firms “jump in from doing nothing with technology to the highest level,” she says, dooming such enterprises from the get-go. Firms end up with systems that no one wants to use, which only reinforces the profession’s tendency to keep doing things the way they’ve always been done.

However, law firms don’t operate in a vacuum, and many have clients who are already using AI in their own businesses, Hackett notes. These corporate clients likely will expect their legal counsel to make use of similar tools that can increase efficiency and, most consequentially, lead to a revamp of how they are billed and how much those services cost.  

“You can’t have one hand clapping,” Hackett says. “You have to do it together. Yes, law firms are slow to change, but market changes are fast, and technology is fast. Law is unpredictable. So is business. Get over it.”

 

Google investing $1B in Va. data center campuses

Google is investing $1 billion in expanding its Virginia data center campuses this year and is launching a $75 million Google.org AI Opportunity Fund, one of Google’s top executives and Gov. Glenn Youngkin announced Friday at Google’s Reston office.

“Today is a great day. We’ve got a $1 billion investment in the commonwealth that we’re announcing. There’s an establishment of a new AI Opportunity Fund. And we’re creating new and opportunistic ways and pathways for people to upskill and find a new pathway to an amazing career,” Youngkin said. “That is worth celebrating.”

Google has two data centers in Loudoun County and one in Prince William County and is investing $1 billion to expand those campuses. The Prince William County data center was built in 2023, according to a Google fact sheet. A Google spokesperson did not provide additional details about the size of the campuses or expansion plans.

Counting the $1 billion investment, Google has invested more than $4.2 billion in Virginia to date, according to a news release.

“The investments we’ve made today are not only important investments in infrastructure, but they’ve also added 3,500 jobs in Virginia, and they’ve supported $1 billion of economic activity, and we look forward to continuing to build on the work that we’ve done today,” said Ruth Porat, Google and Alphabet’s president, chief investment officer and chief financial officer.

Google appears to have additional plans for Prince William County data centers. In March, the U.S. Army Corps of Engineers approved a 181-acre data center complex project application from Delaware-based Sharpless Enterprises, a shell organization linked to Google, according to InsideNova and trade publication Data Center Dynamics.

In Northern Virginia, around 300 data centers are sprawled across Loudoun, Prince William and Fairfax counties, with the majority in Loudoun. The Ashburn area in Loudoun is home to the world’s largest concentration of data centers, a zone known as Data Center Alley, through which more than 70% of the world’s internet traffic passes. The Prince William Digital Gateway, if completed as planned, would be the largest data center complex in the world.

According to the Data Center Coalition, the data center industry invested $37 billion in the commonwealth over just the past two years. One of Google’s main competitors, Amazon Web Services, invested $35 billion in Virginia data centers between 2011 and 2020 and plans to invest another $35 billion by 2040 to develop multiple data center campuses across Virginia.

Northern Virginia’s data center market is larger than the next five U.S. markets combined and larger than the next four global markets combined, Youngkin said, citing a 2023 JLL data centers report, measuring markets by megawatts of built-out critical IT load capacity. “With that comes tremendous synergy and an ecosystem that enables advanced development, and so Google’s $1 billion investment is a continued demonstration that that ecosystem is one worth investing in,” Youngkin said.

Data center opponents argue the centers strain the state’s electric grid. Dominion Energy estimates that Virginia data centers’ demand for electricity will jump from 2.8 gigawatts in 2023 to 13 gigawatts by 2038.

The 2020 Virginia Clean Economy Act created a 2050 mandate for generating electricity statewide from renewable, carbon-free energy sources. In 2022, Youngkin announced a state energy plan that endorsed an “all-of-the-above” mix of energy sources, including hydrogen, natural gas and nuclear power, in addition to the wind, solar and battery storage supported by Virginia Democrats.

“As we grow, the power demand accelerates as well, and that’s why, when we launched our all-of-the-above, all-American energy plan, it embraced the idea that we are going to have to innovate; we are going to have to accelerate in order to meet the growing demand across the commonwealth as our economy accelerates,” he said.

Google’s goal is to run its data centers entirely on carbon-free energy by 2030.

Artificial intelligence training

L to R: Mike Haynie, executive director of the IVMF and Syracuse University’s vice chancellor for strategic initiatives and innovation; Virginia Gov. Glenn Youngkin; Ruth Porat, Google and Alphabet’s president, chief investment officer and chief financial officer; and Virginia Secretary of Commerce and Trade Caren Merrick chat in Google's Reston office. Photo courtesy Google

The $75 million AI fund from the company’s philanthropy, Google.org, will give grants to workforce development and education organizations. The tech giant is also launching a certificate course, AI Essentials, to teach people to “use AI effectively in day-to-day work,” according to a news release.

“I always remind everybody that companies and people make choices. ‘Where are we going to live? Where are we going to invest? Where are we going to spend the next 50 or 100 years of our effort?’” Youngkin said. “And I’m very humbled when families and companies choose Virginia. … And it’s days like today where we’re reminded that one of the greatest companies in the world is, once again, making a decision to invest in Virginia.”

The D’Aniello Institute for Veterans and Military Families (IVMF) at Syracuse University is the first organization to receive funding from the AI Opportunity Fund. The IVMF received a $3.5 million grant to offer the AI Essentials course and the Google Cybersecurity Certificate through its Onward to Opportunity program, a free career training program for transitioning service members, veterans and military spouses.

“With this new Google AI Essentials course, we are confident that we can arm our veterans and military family members with the training and the skills they need to put [AI] technology to use realizing whatever career aspirations they have,” said Mike Haynie, executive director of the IVMF and Syracuse University’s vice chancellor for strategic initiatives and innovation.

Google’s AI Essentials course will take under 10 hours to complete. It will be available for $49 on Coursera, as well as for free through some nonprofits, including IVMF and Goodwill Industries.

McLean AI health care startup raises $111M in Series A funding

McLean-based Zephyr AI has raised $111 million in a Series A funding round, the health care technology startup announced March 13.

The round included participation from about 30 investors, including Revolution Growth, Eli Lilly & Co., Jeff Skoll, and Epiq Capital Group, according to a news release and a Securities and Exchange Commission filing from the company. The raise started in October 2023.

Founded in 2020, Zephyr AI is leveraging artificial intelligence algorithms and tools to develop products and solutions supporting patients and providers and fueling research in the areas of oncology and cardiometabolic disease.

“The U.S. has the highest rate of avoidable cancer and cardiometabolic-related deaths among any high-income country. We must do better,” Grant Verstandig, Zephyr AI’s cofounder and executive chairman, said in a statement. “At Zephyr AI, we are harnessing the power of AI to extract novel insights to better define patient stratification and response predictions as well as improve federation of real-world data. With our world-class team, and the support of this investor group, we are deploying one of the largest clinicogenomic [clinical genomic] datasets that has unprecedented breadth across disease states and data partners. Collectively, we are now well-positioned to support our mission of democratizing precision medicine, enhancing both the speed and success of clinical trials.”

The funds raised will enable Zephyr AI to increase its analytical speed and fortify its training and validation sets, as well as support expansion of the company’s scientific and commercial teams to expedite delivery to the market.

The startup had two abstracts accepted for publication at the American Association for Cancer Research annual meeting in April in San Diego.

“We are excited to be part of this growing ecosystem of AI-enabled drug development and welcome the opportunity to attend [the annual meeting], where we will engage with the scientific community and present some of our emerging scientific insights from our platform,” Jeff Sherman, Zephyr AI co-founder, interim CEO and chief technology officer, said in a statement.

In March 2022, Zephyr AI raised $18.5 million in a seed round.

Charlottesville’s Astraea acquired by Fla. satellite company

Astraea, a Charlottesville-based geospatial analytics firm, has been acquired by Nuview, a Florida-based company developing a satellite imaging constellation, the companies announced Tuesday.

Financial terms of the deal were not available.

Founded as a for-profit benefit corporation in 2016, Astraea applies data science and artificial intelligence to analyze imaging and sensor data gathered from Earth-observing satellites.

“This acquisition of Astraea will allow Nuview to diversify and strengthen its position across multiple markets, leveraging its expanding client base and expertise,” according to a news release from the companies announcing the acquisition.

Founded in 2022, Orlando, Florida-based Nuview is developing a constellation of satellites planned for launch in 2025 that will use lidar technology to scan and map large areas of Earth terrain from space, creating 3D imaging. The acquisition of Astraea will add advanced geospatial image analysis to Nuview’s capabilities. The startup was named to Time magazine’s list of the best inventions of 2023 for its planned lidar satellite constellation. Nuview’s clients include the U.S. Department of Defense and commercial interests.

“Nuview’s lidar technology will provide centimeter-scale accuracy in geospatial intelligence and mapping that is pivotal for defense, climate initiatives, telecommunications, agriculture, energy and national mapping initiatives,” according to the news release.

Astraea CEO and co-founder Daniel Bailey said, “Nuview shares Astraea’s passion for finding solutions to some of the world’s toughest problems through Earth observation and data analytics. With only an estimated 5% of the world mapped with the accuracy that only lidar technology can bring, Nuview’s groundbreaking space-based lidar, coupled with Astraea’s platform infrastructure, is poised to offer unprecedented AI solutions and indispensable data vital for advancing global climate initiatives and sustainable development.”

When the war between Ukraine and Russia began, Astraea spearheaded an effort to provide free satellite imagery that could be used to assist Ukraine’s Ministry of Defense, as well as civilians and humanitarian organizations. In July 2022, Astraea closed an oversubscribed $6.5 million Series A round led by Aligned Climate Capital and Carbon Drawdown Collective with participation from CAV Angels, Tyndall Investment Partners and the University of Virginia Licensing & Ventures Group Seed Fund.

Slowly bot surely

From health care to real estate and law, artificial intelligence is playing an ever-bigger part in many industries, with executives rolling out new tools and updating policies. Like other businesses, banks and credit unions have been exploring this electronic frontier, although they’re pairing technological progress with caution.

Even if you’re new to the topic, you probably have heard of ChatGPT, the trailblazing generative AI chatbot launched by OpenAI in November 2022. It was a big deal, gathering more than 100 million monthly users just two months after launch, but ChatGPT is just the tip of the iceberg. Artificial intelligence has been developing in many forms for decades.

When it comes to technology in use or under consideration at financial institutions, most AI tools are focused on behind-the-scenes work.

With the notable exception of Bank of America’s Erica — an AI-powered virtual assistant launched in 2018 that helps customers find banking information via voice and text — financial institutions’ new tools are not personalized but can make customer service faster and more efficient, detect malware reliably and prequalify customers for loans, among other tasks.

While the possibilities for AI seem endless, banks and credit unions have to balance that sense of adventure with the weighty responsibility of keeping their customers’ sensitive financial and personal information secure.

Separating wheat from chaff

Fairfax-based Apple Federal Credit Union is a member of Curql Collective, a capital fund through which credit unions invest in fintech companies developing AI tools, says the credit union’s chief information officer, John Wyatt. Photo by Will Schermerhorn

Fairfax-based Apple Federal Credit Union, which had more than $4.3 billion in assets and 242,473 members at the end of 2023, is among the top 10 largest credit unions based in Virginia, and it’s also an early AI adopter among credit unions, with several applications currently in place and others in the wings.

John Wyatt, the credit union’s chief information officer, says Apple FCU uses a tool called Zest AI that provides more information on loan seekers than the traditional FICO credit scoring model. It opens doors to borrowers who may have previously had a difficult time getting approved for a loan through no fault of their own.

“We’re looking for … that hidden prime borrower that may not have the credit history that you would need to have a high FICO score,” he says. “What we’re trying to do is qualify more members for loans.”

Another product, CrowdStrike Falcon, helps the credit union examine behavioral indicators to bolster cybersecurity. “It can detect, isolate and respond to threats in real time,” Wyatt explains, as opposed to traditional malware-detection programs, which can take up to three or four months to detect a pattern. By that time, bad actors could have done their damage and moved on to new targets.

Apple FCU is a member of the Curql Collective, a technology capital fund that connects fintech companies creating AI-powered tools with credit unions for investment. In turn, Apple and other members decide which new tools would be appropriate for their organizations. In the past year, with more AI-driven products and entrepreneurs available, Curql (pronounced “circle”) provides a helpful filter for what’s worthwhile and what isn’t, Wyatt says.

“We get first look at vendors that have products that meet the needs of credit unions, and we go to conferences where they actually bring people in to talk about their products. We evaluate them, and we can vote on them and … fund them or not,” he says. “You kind of see what’s coming down the pipeline.”

Wyatt also attended a December 2023 AI innovation conference at which some of the bigger players like Microsoft and Midjourney rolled out new tools and updates. “Things are changing every three, four days,” Wyatt says. “You kind of have to stay ahead of it, and the hype around it is way beyond the peak of inflated expectations.”

In Virginia Beach, Chartway Federal Credit Union has two AI-powered projects underway that are set to go live in March, says Rob Keatts, Chartway’s executive vice president and chief strategy officer. One is Experian’s AI-powered custom credit score program. The other is a customer-facing telephone banking system that will use a “conversational AI bot” allowing customers to “call in and just check your balance or move money between your own accounts,” Keatts explains. “And for whatever reason, it is extremely popular with people.”

Interestingly, the demographic breakdown of phone banking shows it is most popular among Chartway’s members over age 50 and its youngest members, in their 20s, Keatts notes. Gen Xers and millennials tend to prefer mobile banking, according to statistics pulled by Chartway’s analytics consultants in late 2022.

Last year, Chartway started Chartway Ventures, a credit union-backed venture capital fund to invest in fintech startups, similar to Curql. It helps Keatts learn more about what tools are under development, as well as what is worth investing in — since part of a credit union’s charter is managing its customers’ money responsibly.

“Being a member-owned cooperative credit union, it’s truly our members’ money,” Keatts says. “We really do look to see [if] we’re going to put out x amount for this product, what are we getting back? And then, from a cybersecurity standpoint, everything goes through our standard security checks before we go live. We do a deep dive into the background of the organization we’re partnering with. We don’t put any sensitive information into a public large-language model like a ChatGPT.”

Keatts also learns about new tools and gets recommendations through relationships with other credit unions and attending events where fintech startups present their products.

Both Wyatt and Keatts note that the size of their financial institutions — sitting in the top 10 largest credit unions based in Virginia — allows them to invest in and explore AI tools more easily than smaller credit unions or smaller banks can.

Big banks, bigger investments

On the leading edge of AI use in the finance sector, however, are the nation’s largest banks, among them McLean-based Capital One Financial and Bank of America. Unlike Apple and Chartway, these banking giants build their own tech in-house in addition to collaborating with third parties.

Capital One’s mobile app and fraud detection tools, among other products, use AI and machine learning, and the bank has created a framework to “manage and mitigate risks associated with AI,” its chief scientist and head of enterprise AI, Prem Natarajan, said during congressional testimony in November 2023. “We have a wide range of tools for managing risk relating to AI, including model risk management, credit risk, strategic risk, third-party risk, data management [and] compliance risk.”

Beyond tools in use by the bank, Capital One has invested in broader educational efforts, including its internal Tech College, which provides training to its employees on machine learning-based systems and products.

Meanwhile, as of July 2023, more than 38 million Bank of America customers had engaged with the virtual assistant Erica to manage their bank accounts, generating 1.5 billion interactions, with requests like “Replace my credit card” or “What’s my FICO score?”

In Virginia, 27% of Bank of America clients used Erica as of October 2023, up from 22% a year earlier. In September 2023, using the same proprietary AI and machine-learning capabilities as Erica, Bank of America launched an AI chat function for corporate and commercial clients to manage their finances on its CashPro platform.

Like the credit unions, Bank of America is focused on security of its customers’ sensitive information when developing new products.

“Our approach has never been to … chase the shiny objects,” says Nikki Katz, Bank of America’s Los Angeles-based head of digital. “We’re always looking at it from the perspective of, ‘How does this translate to client benefit?’”

Digital banking is in more demand than ever; during the pandemic, many bank and credit union customers moved to online, phone or mobile banking options.

Tom Durkin, global product head of CashPro in Global Transaction Services at Bank of America, says expectations for digital banking “really hit the business community a lot harder, as they had to adapt and leverage some of these capabilities. I think those things factored into expectations … coming off the pandemic, in terms of accessibility to information and the ability to get access.”

Although Bank of America was the first bank to launch a virtual assistant, back in 2018 — an eon ago in technological terms — innovations related to Erica are still going forward even as customer usage increases, Katz notes.

“There’s definitely climbing interest in this space, and we’re continuing to see new applications, whether it’s helping our clients stay on top of their cash flow or changes to their recurring charges,” she says. “As there’s more investment in the space, we’re going to examine opportunities to evolve and improve that client experience and our associate experience with it.”


Business smarts

In Assistant Professor Michael Albert’s MBA data science class at the University of Virginia’s Darden School of Business, students analyze historical usage data for a bike-sharing service to determine a bicycle maintenance schedule. In the past, they would tackle this simulation by writing code in the Python computer programming language — a pain point for many since their interest lies in business, not computer science.

But now that generative AI can produce syntactically correct code based on a plain-language prompt, the assignment has transformed from a technical coding problem into an exercise in articulating analytical goals, one that, Albert says, is more about developing critical thinking skills.
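For instance, a student might describe the goal in plain language — "flag any bike that has logged more than four hours of riding since its last service" — and get back code along these lines. (A hypothetical sketch: the trip data, bike IDs and maintenance threshold here are invented for illustration, not taken from the Darden assignment.)

```python
from collections import defaultdict

# Hypothetical trip records: (bike_id, ride duration in hours)
trips = [
    ("bike_1", 0.5), ("bike_2", 1.2), ("bike_1", 2.0),
    ("bike_3", 0.8), ("bike_1", 1.7), ("bike_2", 0.4),
]

MAINTENANCE_THRESHOLD_HOURS = 4.0  # assumed service interval

# Sum riding hours per bike since its last service
usage = defaultdict(float)
for bike_id, hours in trips:
    usage[bike_id] += hours

# Flag bikes that are due for maintenance
due = sorted(b for b, h in usage.items() if h >= MAINTENANCE_THRESHOLD_HOURS)
print(due)  # bike_1 has logged 4.2 hours, so it is flagged
```

The point of the exercise shifts from writing loops to deciding what counts as "due for maintenance" and whether the model's code actually answers that question.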

“They’re not engineers or mathematicians; what our students care about is effective decision-making — being able to look at a complex …  multidimensional problem and come away with the insights that will allow them to make the right decisions,” he says. “I view ChatGPT as a way to accelerate our students’ ability to make those decisions.”

Since the public debut of OpenAI’s ChatGPT generative artificial intelligence platform in November 2022, business leaders have been avidly exploring the benefits and potentials of generative AI. And business schools have accordingly come to see the need to integrate AI training into their curricula to ensure graduates are prepared to succeed in fast-changing workplaces where generative AI may be here to stay.

“I see that generative AI is transforming work across the board, and I think that it has the ability to increase the efficiency and productivity of workers,” says Paul Brooks, department chair and professor in information systems at the Virginia Commonwealth University School of Business. “In order for our students to be competitive in the workforce, they’re going to have to know how to use this technology.”

Brooks’ analytics students use AI for assignments such as analyzing historical data to figure out how many orders a vendor should prepare, a process that requires critically assessing AI’s output and refining prompts to improve the result. Such tasks are going to become ubiquitous as these tools evolve and penetrate all aspects of business operations.

“I can’t think of one corner of the world of work that will not be impacted by this,” says Phillip Wagner, a clinical associate professor at William & Mary’s Raymond A. Mason School of Business. “I think there’s a call for every industry and academic domain to be thinking about it. Without reservation, all business schools need to be doing this.”

Closing the AI gap

While employers around the world move quickly to embrace AI, there is a disconnect between leadership perspectives and employee actions and capabilities. An August 2023 Gallup survey found that 72% of Fortune 500 chief human resources officers anticipate AI will transform their companies’ staffing needs within the next three years. Yet 70% of employees say they never use AI tools, and more than half don’t feel capable of doing so.

Business schools need to help fill this gap. An early formal example of this is Northwestern University’s MBAi Program, an AI-focused degree program run jointly by the university’s Kellogg School of Management and McCormick School of Engineering. But in most cases, business schools are just beginning to integrate AI lessons and concepts into their curricula informally or on a case-by-case basis within the context of faculty discussions.

In October, the University of Virginia’s Darden School of Business received the largest gift in its 68-year history from alumnus David LaCross and his wife, Kathleen. Their $94 million donation, among the 10 largest gifts ever received by any business school, is aimed in part at helping Darden become a pioneer in researching and teaching about artificial intelligence.

In June, the university’s Generative AI in Teaching and Learning Task Force reported that 42% of U.Va. students currently use generative AI tools to assist with coursework. The task force recommended that schools and units within the university mandate syllabus statements around expectations on AI usage. The business school has been holding monthly seminars for faculty covering issues connected to AI, and faculty members who are interested in integrating AI into their courses can consult with instructional designers.

Kushagra Arora, Darden’s chief digital officer, emphasizes that the business school has been using other forms of AI in courses for a decade, so the advent of generative AI need not change anything fundamentally.

“We still want to do things in ways that are ethical and current, and to manage risk and security,” he says. “Almost every business unit at Darden at this point is using AI in some way or another.”

Evolving responses

Generative AI’s “potential is quite exciting,” says Karen Conner, director of academic innovation at William & Mary’s Mason School of Business. “I think we’re living in great times.” Photo by Mark Rhodes

As at U.Va., other Virginia business schools are proceeding with plans for AI with much discussion, information-sharing and education but with relatively little emphasis thus far on formal policies governing its use.

Creating blanket policies for AI usage is fraught because it can impinge on academic freedom and faculty autonomy, some experts say, and such policies can also become outdated quickly as technology changes at lightning speed. As a result, most schools are focusing on providing information and guidance on how faculty can integrate AI while maintaining intellectual rigor, ethical standards and academic integrity.

“You don’t want to make a firm response and then have that become immediately outdated,” says Benjamin Selznick, associate professor and adviser of postsecondary analysis and leadership at James Madison University’s College of Business. “You want a dynamic, flexible context that’s going to evolve as the technology itself rapidly evolves.”

Meanwhile, some schools are proactively ensuring that faculty are well-versed on generative AI and equipped with needed resources.

William & Mary’s Mason School of Business has created a task force and a community of practice around generative AI. The school’s Academic Innovation team is publishing information about AI in its newsletter and is teaming up with the school’s McLeod Business Library and Center for Online Learning to hold virtual office hours for faculty on the topic. The team also created an online AI toolkit for faculty and presents updates on generative AI at every faculty meeting.

The overriding message is that faculty should play a key role in helping students learn to make the most thoughtful and effective use of AI tools.

“We need to expose our students to this technology so they can understand its limitations, its biases and the knowledge to critique what they have received as an output,” says Karen Conner, director of academic innovation at the Mason School.

Part of the AI-focused conversation at business schools revolves around how to maintain rigor, assessment standards and honor codes. It’s important for MBA programs to confidently certify that their graduates possess requisite skills, but assessing that can require creativity in a context where there are no effective filters to identify AI-generated output. 

“Everyone’s handling it in their own way,” says Selznick at JMU. “It’s important to acknowledge that effectively collaborating with AI is going to be a valuable skill, but I’m also hearing about instructors who are saying, ‘I’m going back to pen-and-paper final exams because that’s the only way I’m going to know’” students aren’t cheating by using AI to complete assignments.

VCU’s Brooks says some faculty are turning to oral exams or asking students to report on their work in ways AI never could, such as by completing a task and then explaining their thought processes.

“It’s been very much disruptive to the way we’ve been doing things,” he says. “It’s caused us to rethink how we deliver the materials.”

Wagner, at William & Mary, believes that kind of rethinking can be good for faculty and their students. While he recognizes that some classes may have a stronger need for verifiable assessment than others, he encourages professors to reconsider how to appraise students’ learning.

“Instead of dwelling in the land of anxiety, I think it’s an invitation,” he says. “If your courses are ones where your students could plug your homework prompts into a machine and get an output, maybe it’s not necessarily the students’ problem alone. It’s that your teaching method needs some refreshing.”

Generating conversation

For all kinds of educators, grappling with AI is ultimately a question of how to combine the possibility and disruption of emerging technology with the age-old tradition of cultivating critical thinking skills.

“Essentially, for me, this goes back to liberal arts education,” says Kenneth Kahn, professor and dean at Old Dominion University’s Strome College of Business, where faculty are focused on helping students understand the potential pitfalls of AI, such as the technology’s tendency toward hallucinations and biases, as well as exploring promising ways that AI can enhance productivity. “You have to have that critical thinking to evaluate what the output is, to decide whether or not it’s important. That’s where our MBA students and our business schools need to go.”

For some MBA faculty, using AI to further a liberal arts education means using the tools to encourage open conversation and robust self-reflection. Tracy Johnson-Hall, a clinical associate professor at William & Mary’s business school, requires her MBA students to explain how they used generative AI on each assignment.

“First and foremost, by encouraging open discussion of the use of generative AI, I want to reduce the perception of any prohibition around it and instead focus on open conversations about where it is useful and how best to leverage it for productivity,” she says. “Asking them to explain how they use it generates conversation.”

William & Mary’s Wagner began requiring his MBA students to use generative AI in the fall 2023 semester, both to teach best practices for its use, but also to offer them opportunities to reflect on the process and on themselves.

For a course called Diversity in the Workplace, he has students conduct a dialogue of at least 30 messages with a generative AI tool on the subject of “a diversity hang-up,” such as friction between diverse beliefs and religious convictions. The process allows students to learn “to ask better questions and to be questioned, which makes us better, more well-rounded critical thinkers,” he says.

For William & Mary MBA student Skander Lakhal, who spent 10 years in the oil and gas exploration industry before returning to school, such facilitation of critical thought is the most promising aspect of generative AI. He uses AI to help with research, summarize long articles, process recordings and slides from lectures, prepare for exams and generate examples of difficult concepts from his classes.

“Through these experiences, I’ve learned that AI is more than just a tool for efficiency,” he says. “It’s a catalyst for deeper understanding and innovative problem-solving.”

Great possibilities lie in generative AI tools that enable intellectual exploration and new ways of thinking and working, say business school professors who are approaching the tools with an attitude of curiosity and enthusiasm.

“The potential is quite exciting, I think,” says Conner of William & Mary. “I think we’re living in great times.” 


McLean tech firm Pangiam to sell to BigBear.ai for $70M

Columbia, Maryland-based BigBear.ai Holdings is acquiring Pangiam Intermediate Holdings, a McLean-based facial recognition and biometrics solutions provider for the trade, travel and digital identification industries, in a $70 million, all-stock deal, BigBear announced Monday.

The move will combine Pangiam’s technologies with BigBear.ai’s computer vision capabilities and allow BigBear.ai to expand its customer base and offerings to airlines, airports and identity-verification companies, as well as within the U.S. Department of Homeland Security. The deal is expected to close in the first quarter of 2024 and is subject to regulatory approval.

BigBear.ai has more than 20 federal defense and intelligence customers and 160 commercial customers.

“Vision AI [artificial intelligence] has long been considered the holy grail of applied AI because of its potential to perceive and interact with the world in a human way,” BigBear.ai CEO Mandy Long said in a statement. “BigBear.ai’s acquisition of Pangiam will create a full-vision AI portfolio — among the first in the industry — leveraging near-field vision AI in support of localized environments and far-field vision AI in support of global-scale environments.”

Pangiam was created in 2020 by Boca Raton, Florida-based private equity firm AE Industrial Partners through the acquisition and combination of Alexandria-based software company Linkware and Pangiam's predecessor company, PRE. In 2021, Pangiam purchased veriScan, an integrated biometric facial recognition system for airports and airlines, from the Metropolitan Washington Airports Authority.

Gov. Glenn Youngkin announced in September 2022 that Pangiam would invest $3.1 million to expand its Fairfax County office and establish its global headquarters there, creating 201 jobs over three years. It was not immediately clear Tuesday whether the company would move as a result of the acquisition or whether staff changes or cuts would be made.

“The combination of Pangiam and BigBear.ai will position our combined companies to vault solutions currently available in market,” Pangiam CEO Kevin McAleenan said. “With our shared mission and a complementary customer base and product set, our teams will be able to pursue larger customer opportunities, enhance our technology development and accelerate our growth. We’re thrilled to soon join the BigBear.ai team.”

 

Fairfax AI firm acquired by Texas-based tech company

Fairfax-based ARInspect, a firm specializing in artificial intelligence products for public sector field operations, has been acquired by Texas software company Tyler Technologies, the companies announced Tuesday.

Tyler declined to disclose the price and terms of the deal.

Tyler will add ARInspect’s AI-powered platform to its portfolio and use the Fairfax company’s technology across its verticals with a focus on all regulated entities, including environmental protection, disaster recovery and human services, according to a news release. ARInspect’s platform allows public sector employees to work independently to manage all activities in the field. The platform analyzes historical data, completed inspections, violations, integrated census data and more, and helps agencies identify sites, assets and facilities that may be at risk.

“Over the last few years, we have seen a great demand for public sector edge technology with the power of AI and automation,” Vivek Mehta, founder and CEO of ARInspect, said in a statement. “We couldn’t be more excited to combine our expertise with Tyler’s to provide a powerful and user-friendly field operations platform. Our similar values and commitment make this the ideal partnership for all ARInspect and Tyler clients.”

The 40 employees of ARInspect will join Tyler’s platform solutions division, and the management team is expected to be a key part of the division, a Tyler spokesperson told Virginia Business. Tyler also has offices in Herndon, with 178 employees; Arlington County, where 48 employees work; and Richmond, with 30 workers. ARInspect’s employees will move to Tyler’s Herndon office.

“Tyler understands the challenges that government agencies have in providing resources to field workers, including access to smart capture tools, real-time data, and the decision-making capabilities that can impact effectiveness,” Brian Combs, president of Tyler’s platform solutions division, said in a statement. “ARInspect’s platform and expertise in AI and machine learning combined with Tyler’s public sector experience and robust portfolio will help deliver on our promise to create smarter, safer and stronger communities for our clients.”

It’s alive — with possibilities

Just a word of friendly warning: Our November cover story is one of the strangest tales ever told. I think it will thrill you. It may shock you. It might even horrify you. So, if any of you feel you do not wish to subject your nerves to such a strain, now’s your chance to … well, we warned you.

With that tongue-in-cheek nod to the introduction from Universal Pictures’ 1931 horror classic “Frankenstein” behind us, I confess that I’m writing these words in October, smack in the middle of that autumn month filled with ghosts and goblins and things that go bump in the night. And it’s appropriate to reference “Frankenstein,” given that this month’s cover story by Virginia Business Associate Editor Katherine Schulte is concerned with humanity’s quest to artificially replicate intelligence and how the business community hopes to harness that lightning-fast technology for increased productivity and profits — topics that can induce feelings ranging from excitement to dread.

Two of the most common refrains I’ve heard about artificial intelligence this year are these: “You may not lose your job to AI, but you will lose your job to someone who knows how to use it,” and “The opportunity outweighs the fear.”

To be sure, from the moment OpenAI unveiled its ChatGPT generative AI platform to the public one year ago, there have been strong scents in the air of both fear and money.

ChatGPT has passed the nation’s standardized bar exam, scoring better than 90% of lawyers who took the test. It’s been used to diagnose illnesses, research legal precedents and write everything from e-books and marketing emails to Excel formulas and computer code.

Personally, I’ve used it to draft business letters and marketing materials. I find its efforts can generally be too effusive, but even when its output requires a little tweaking, it admittedly has saved me some time. Similarly, I’ve tasked ChatGPT with organizing large groups of data into spreadsheets. For those chores, the results have been a bit more uneven. ChatGPT can spit out a spreadsheet in a couple of minutes or less, but it’s kind of like having a speedy college intern who requires some hand-holding and may be prone to mistakes. Sometimes, in its eagerness to please, ChatGPT will invent missing data without understanding that’s not helpful or appropriate. Other times, it may place data in the wrong rows or columns. However, even with correcting ChatGPT’s work, a job that might have taken me two or three hours on my own took only about 45 minutes to an hour to complete.

And while Virginia Business isn’t using AI to write news stories — sorry to disappoint, but this column was written by a ho-hum human — you may have guessed that the striking art adorning our cover and illustrating its accompanying feature story this month were generated using artificial intelligence.

The past year has seen dramatic improvements in AI art tools such as Midjourney and Adobe Firefly, which have learned from a huge body of existing images (mostly by human artists) to generate new artwork. With Adobe’s latest updates, a minimally skilled user like myself can generate startlingly creative works. In Photoshop, I can take a pastoral farm photo and instantly replace a barn with photorealistic cows just by typing in those words; it will appear as if the barn had never been there. That’s fantastic if I’m creating generic illustrations, but that might be problematic if I’m a real estate agent who’s marketing a specific property and decides to spiff it up to look better than reality. Because we humans are operating this tech, it is as rife with possibilities for productivity as it is for misuse. As Schulte reports in her story, Virginia companies from accounting firms to health care systems and law firms are exploring not only real-world applications for generative AI, but also how to install virtual guardrails around it.

Like Dr. Frankenstein, the geniuses who are spawning today’s AI tools are hardly pausing to consider the ramifications before sending their creations shambling into the world. And like Frankenstein’s lightning-birthed monster, generative AI’s existence presents a host of ethical questions that are fast following behind it.

The next frontier

You come down with coldlike symptoms. Flu season is here, and a new COVID subvariant is circulating. As the illness lingers, you question whether you should see a doctor.

Imagine putting your symptoms into a chatbot connected to your doctor’s office or health system that can retrieve your medical records, evaluate your information and recommend next steps.

“It could make recommendations on … should you be seen by one of our providers in the emergency room? Should you have a virtual visit with a provider? Should you have just a conversation with a triage nurse? Or do you need to schedule an appointment with a provider?” says Dr. Steve Morgan, senior vice president and chief medical information officer at Roanoke-based health system Carilion Clinic.

Such a scenario isn’t science fiction — it exists now, through artificial intelligence-powered tools like Microsoft’s Azure Health Bot. 

“Although we don’t have it now, we’re building the infrastructure to be able to employ that type of technology,” Morgan says. Carilion has already embraced other AI software, like a dictation system for medical notes.

One year after ChatGPT came on the scene, redefining expectations for AI capabilities, industries have already begun adopting AI chatbots in varying forms, including creating their own models. In this Wild West of rapidly developing tech, companies’ workforce training methods range widely, from encouraging employee exploration to structuring rollouts.

Generative AI tools like ChatGPT — AI platforms used to synthesize new data, rather than just analyze data as AI has been traditionally designed to do — are built on large language models (LLMs) that are essentially “glorified sentence completion tools,” says Naren Ramakrishnan, the Virginia Tech Thomas L. Phillips Professor of Engineering and director of Tech’s Sanghani Center for Artificial Intelligence and Data Analytics.

“They sound so realistic and so compelling because they have been trained or learning on a ridiculous amount of data,” enabling the AI engines to learn which words make sense in context, he explains.
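That "sentence completion" idea can be shown in miniature with simple word counts: tally which word most often follows each word in some text, then predict accordingly. (A toy illustration with a made-up corpus; real LLMs use neural networks trained on vastly more data and context than one preceding word.)

```python
from collections import Counter, defaultdict

# Tiny invented training corpus (real models train on trillions of words)
corpus = "the bank approved the loan and the bank issued the card".split()

# Count which word follows each word (a simple bigram model)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# "Complete the sentence": pick the most frequent follower of "the"
prediction = following["the"].most_common(1)[0][0]
print(prediction)  # "bank" follows "the" most often in this corpus
```

Scaled up by many orders of magnitude, that same learn-what-comes-next principle is what lets a chatbot produce fluent, contextually plausible prose.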

OpenAI’s ChatGPT became a household word shortly after OpenAI released a demo of the conversational AI platform on Nov. 30, 2022. ChatGPT is capable of performing many of the same tasks as human knowledge workers — ranging from drafting emails, business letters, reports and marketing materials to performing paralegal duties, writing computer code, putting data into spreadsheets and analyzing large amounts of data — and it can produce finished work in as little as one second to a few minutes, depending on length and complexity. In March, OpenAI released an updated model, GPT-4, available to subscribers. GPT-4 scored better than 90% of human test-takers on the Uniform Bar Exam, the standardized bar exam for U.S. attorneys.

Generative AI has garnered huge investments. Microsoft has reportedly invested $13 billion in OpenAI since 2019, and Amazon announced in September that it would invest up to $4 billion in Anthropic, an OpenAI rival that has also received $300 million in funding from Google.

In a survey of 1,325 CEOs released in early October by KPMG, 72% of U.S. CEOs deemed generative AI “a top investment priority,” and 62% expect to see a return on their investment in the tech within three to five years.

Generative AI is developing at a blistering pace. On Sept. 25, OpenAI released a version of ChatGPT that can listen and speak aloud. It’s also able to respond to images.

AI is already changing the work landscape, says Sharon Nelson, president of Fairfax-based cybersecurity and IT firm Sensei Enterprises. “It’s a bolt of lightning. … We’re seeing it go at the speed of light, and I can only imagine that it will go faster still.”

McGuireWoods is providing training on AI basics, ethical use and prompt engineering, says Peter Geovanes, the firm’s chief innovation and AI officer. Photo courtesy McGuireWoods/AI background illustration by Virginia Business staff

Power players

As the tech has quickly progressed, large Virginia companies have formally adopted AI tools and are creating standard AI training policies and processes for their employees.

Reston-based Fortune 500 tech contractor Leidos is providing varying levels of training for employees based on their needs, ranging from those who need to build awareness of AI to subject matter experts. Leidos builds curricula with a mix of external courses from suppliers like Coursera and in-house content, says Doug Jones, deputy chief technology officer.

Like many companies, Leidos is creating an internal AI chatbot, although the company also plans to offer it to customers. The chatbot will focus on IT and software questions, allowing workers to search for answers specific to the firm.

Businesses with troves of documents can easily adapt an LLM to be specific to their documents and processes, Ramakrishnan says: “I’m noticing everybody wants to create their own LLM that’s specific to them that they can control. Because they certainly do not want to send their data out to OpenAI.” Because ChatGPT learns from its interactions with humans, information entered into the tool could be shared with another user.
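One common pattern for grounding a model in a company's own documents, often called retrieval-augmented generation, is to search the internal document store first and hand only the most relevant passage to the model along with the question. A minimal keyword-overlap version is sketched below; the document names and contents are invented, and production systems typically rank passages with vector embeddings rather than word matching.

```python
def score(query: str, document: str) -> int:
    """Count how many words from the query appear in the document."""
    doc_words = set(document.lower().split())
    return sum(1 for w in query.lower().split() if w in doc_words)

# Hypothetical internal documents kept inside the company
docs = {
    "vacation_policy": "employees accrue vacation days each pay period",
    "expense_policy": "submit expense reports within thirty days of travel",
    "it_handbook": "reset your password through the internal helpdesk portal",
}

query = "how do I reset my password"

# Retrieve the best-matching document; it would then be prepended
# to the prompt so the model answers from company data, not guesswork
best = max(docs, key=lambda name: score(query, docs[name]))
print(best)
```

Because only the retrieved passage is sent to the model, a firm can keep its full document trove in-house rather than shipping it all to an outside provider.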

Businesses are also taking advantage of generative AI tools built specifically for their industries.

Virginia’s largest law firm, Richmond-based McGuireWoods, is beginning to use CoCounsel, an AI tool designed for attorneys and built on GPT-4 that should allow attorneys to enter client data securely in the near future. Thomson Reuters acquired CoCounsel’s developer, Casetext, in April for $650 million in cash.

CoCounsel has a range of uses, like drafting a discovery response or helping an attorney brainstorm questions for a specific deposition. An attorney preparing to depose an expert witness could feed the tool the expert’s published papers and ask it to summarize them or ask it whether the expert has ever taken a position on a particular subject in them, explains McGuireWoods Managing Partner Tracy Walker.

A widening reach

ChatGPT isn’t always a reliable source, as it sometimes can fabricate detailed answers, a phenomenon referred to as “hallucinations.” One attention-grabbing misuse of ChatGPT that demonstrated this problem occurred when lawyers representing a client in a personal injury case against Avianca Airlines cited six fabricated cases as legal precedent, based on research using ChatGPT. A federal judge fined the firm — Levidow, Levidow & Oberman — and two lawyers $5,000 apiece.

Walker stresses that responsible attorneys will look up and read cases cited by an AI chatbot, but CoCounsel also provides a safeguard, says Peter Geovanes, McGuireWoods’ chief innovation and AI officer: It’s been instructed not to provide an answer if it does not know it.

McGuireWoods is taking a two-phased approach to CoCounsel’s rollout. The first phase, which started in September and is running through the end of the year, is a pilot program with about 40 attorneys. While Casetext completes its security review of CoCounsel, McGuireWoods’ pilot group is using only public data to test hypothetical uses of the tool. Starting in early 2024, McGuireWoods’ phase two testing will likely expand to about 100 attorneys.

In the meantime, Geovanes is leading foundational training about generative AI. The firm’s first brown bag webinar session was set for Oct. 17. Although the curriculum is designed for attorneys, recordings will be available for any interested employee. McGuireWoods also plans to offer outside courses about the responsible and ethical use of generative AI.

For attorneys selected for the pilot program, the firm will also offer specialized training from Casetext on “prompt engineering” — how to phrase questions to the chatbot to get the desired responses.

In Roanoke and the New River Valley, Carilion is preparing to pilot a new layer of an existing AI-powered transcription tool built for clinicians. The system has used Nuance’s Dragon Medical One, which transcribes clinicians’ notes as they speak, for “a number of years,” Morgan says.

Microsoft purchased Nuance for $19.7 billion in March 2022. In March 2023, Nuance launched Dragon Ambient eXperience (DAX) Express (now DAX Copilot), which is based on GPT-4. It listens to a clinician-patient conversation and drafts clinical notes seconds after the appointment. Morgan hopes to begin piloting DAX in the first quarter of 2024. Because they’ve used Dragon, practitioners likely won’t need much training to adjust to DAX, he says.

Additionally, Carilion is participating in a pilot test of an AI component in the MyChart patient portal offered by Carilion’s electronic medical records vendor, Epic. The AI tool is designed to draft responses to patient questions sent through the portal, taking into account a patient’s medications and medical history. Six Carilion practitioners are participating in the pilot, which started in September, receiving on-the-fly training from Epic and providing feedback.

Examining new terrain

Smaller Virginia companies with fewer resources seem to have taken a more cowboy approach to the new AI frontier, setting ground rules before encouraging employees to explore generative AI tools on their own.

Will Melton, president and CEO of Richmond-based digital marketing agency Xponent21, is also leading a regional work group focused on preparing Richmond’s workforce for AI. Xponent21 initially used Jasper, an AI software tool for writing and marketing, but the firm now uses ChatGPT for tasks like information analysis and developing initial copy, which then goes through human editors.

“I think that the biggest thing that these tools give us is freeing up time that is … spent on monotonous activities that don’t have a lot of value,” like helping employees spend less time writing social media posts or blogs and more time speaking with clients, he says.

Ben Madden, board president for the Northern Virginia Society for Human Resource Management, has begun using ChatGPT in his HR consulting work, asking the AI tool to draft job descriptions and synthesize information for presentations and policy documents.

“Having it be able to do tasks that may take longer without having the support of supercomputers behind it is where I continue to probably see it being used and being able to make my life easier as either a business owner or even for my clients,” says Madden, whose one-person consultancy, HR Action, is based in Arlington County.

Another Richmond-based business beginning to adopt AI is accounting firm WellsColeman, which updated its internet acceptable use policy to include guardrails for AI and ChatGPT usage, like prohibiting employees from entering client data into the platform.

Nevertheless, the firm has encouraged its employees to get familiar with ChatGPT, says Managing Partner George Forsythe. In full firm meetings, leadership will sometimes demonstrate how they’ve recently used ChatGPT, and staff can ask questions or discuss possible uses.

“We’re using [ChatGPT] as an initial step in gaining familiarity with areas that are not part of our everyday expertise. It’s an easy way to get a broad brush on any topic area,” says Forsythe. After verifying the information given, staff can use it as a starting point for their research.

Forsythe has consulted ChatGPT on general management questions, like how to communicate with an employee having leadership challenges, and has also used it as a marketing aid.

“When it comes to selling our services, I’ve asked it to put together a proposal and make it intriguing and have a hook,” Forsythe says, and he’s been pleased with the results.

Similarly, Winchester-based accounting firm YHB is using generative AI tools for marketing questions that aren’t firm-specific.

“Our team uses [ChatGPT] a ton to help understand and interpret tax laws and information like that,” says Jeremy Shen, YHB’s chief marketing officer. They’ll also ask the chatbot if a website post will have a high search engine optimization score.

The firm is working on selecting an AI tool to formally implement, whether ChatGPT Enterprise, Microsoft’s Copilot or another. For now, “we just kind of said, ‘We know you’re using it. We know people are using it. Here’s some guardrails … but discover and let us know if you come up with something useful,’” Shen says.

Carilion Clinic is participating in a pilot for an AI feature being tested by the health system’s electronic medical records vendor, says Dr. Steve Morgan, Carilion’s senior vice president and chief medical information officer. Photo by Don Petersen/AI background illustration by Virginia Business staff

The new steam engine?

Out of 31,000 people surveyed across 31 countries, 49% are worried that AI will replace their jobs, according to a Microsoft survey released in May. That same month, a CNBC/SurveyMonkey poll found that 24% of almost 9,000 U.S. workers surveyed are worried that AI will make their jobs obsolete.

It’s not an unfounded fear. Within 10 years, AI automation could expose the equivalent of about 300 million full-time jobs to replacement, according to a March report from Goldman Sachs researchers, but it could also raise global GDP by 7%, or nearly $7 trillion. In May, IBM CEO Arvind Krishna said AI could replace up to 7,800 jobs — 30% of the company’s back-office workers — over five years.

A refrain commonly heard among AI’s proponents is, “AI won’t take your job, but someone who knows how to use AI will.” It’s paraphrased from a statement made by economist Richard Baldwin, a professor at the International Institute for Management Development, during the 2023 World Economic Forum’s Growth Summit.

“I see some paralegals perhaps being replaced by AI, and only some, because there are some paralegals that have other advanced skills as well,” says Nelson with Sensei Enterprises, who is also an attorney and former president of the Virginia State Bar. Lawyers who do simpler tasks like drafting wills or divorce contracts might be vulnerable to being supplanted by AI, too, she says.

Comparisons to prior technological advances abound. “When the world switched from horse-drawn transport to motor vehicles, jobs for stablehands disappeared, but jobs for auto mechanics took their place,” Federal Reserve Board of Governors member Lisa D. Cook said in a September speech at a National Bureau of Economic Research conference. Workers’ adaptability will depend on their “portfolio of skills,” she said.

Supporters say AI will make employees more productive, which can help industries weather labor shortages and let workers put aside rote tasks to focus on higher-level work, which could increase their job satisfaction.

In the world of government contracting, the requirements placed on some workers, like obtaining security clearances and working in person in classified environments, can make hiring difficult, says Leidos’ Jones.

“We actually find sometimes we can take some of the tasks that are not as engaging for our own employees [like data entry] … off their plate, and they can spend more time doing the things that are really powerful and unique to humans,” he says.

Forsythe also sees AI as an aid to staff: “Right now, the war is for talent. … If we can’t find more people, one of the things we can do is try to change their roles … and support them in manners that make their jobs easier, not so that way they’ll do more work, but so that way they remain part of the firm and don’t feel overburdened,” he says.

Or it could just improve workers’ quality of life. In an early October interview with Bloomberg Television, JPMorgan Chase CEO Jamie Dimon predicted that time savings from AI could result in a universal 3.5-day workweek — though he also said that he anticipates that AI will result in lost jobs.

While AI will eliminate jobs, it will also create them, experts say. The Washington, D.C., region had almost 1,800 listings for AI-related jobs at the end of August, according to Jones Lang LaSalle. Leidos and Boeing were among the companies with the most openings for AI positions.

New roles are emerging, like “prompt engineers” who develop and refine prompts or queries for AI tools to get the most valuable and appropriate responses. At the end of September, OpenAI rival Anthropic was seeking a “prompt engineer and librarian” hybrid position in San Francisco with a salary range of $250,000 to $375,000.

“The people who study the future of work, they say that certain jobs will go away,” Ramakrishnan says, “… but then there will probably be new jobs created that we don’t know yet.”