
Slowly bot surely

From health care to real estate and law, artificial intelligence is becoming an ever-larger part of many industries, with executives rolling out new tools and updating policies. Like other businesses, banks and credit unions, too, have been exploring this electronic frontier, although they’re pairing technological progress with caution.

Even if you’re new to the topic, you probably have heard of ChatGPT, the trailblazing generative AI chatbot launched by OpenAI in November 2022. It was a big deal, gathering more than 100 million monthly users just two months after launch, but ChatGPT is just the tip of the iceberg. Artificial intelligence has been developing in many forms for decades.

When it comes to technology in use or under consideration at financial institutions, most AI tools are focused on behind-the-scenes work.

With the notable exception of Bank of America’s Erica — an AI-powered virtual assistant launched in 2018 that helps customers find banking information via voice and text — financial institutions’ new tools are not personalized but can make customer service faster and more efficient, detect malware reliably and prequalify customers for loans, among other tasks.

While the possibilities for AI seem endless, banks and credit unions have to balance that sense of adventure with the weighty responsibility of keeping their customers’ sensitive financial and personal information secure.

Separating wheat from chaff

Fairfax-based Apple Federal Credit Union is a member of Curql Collective, a capital fund through which credit unions invest in fintech companies developing AI tools, says the credit union’s chief information officer, John Wyatt. Photo by Will Schermerhorn

Fairfax-based Apple Federal Credit Union, which had more than $4.3 billion in assets and 242,473 members at the end of 2023, is among the top 10 largest credit unions based in Virginia, and it’s also an early AI adopter among credit unions, with several applications currently in place and others in the wings.

John Wyatt, the credit union’s chief information officer, says Apple FCU uses a tool called Zest AI that provides more information on loan seekers than the traditional FICO credit scoring model. It opens doors to borrowers who may have previously had a difficult time getting approved for a loan through no fault of their own.

“We’re looking for … that hidden prime borrower that may not have the credit history that you would need to have a high FICO score,” he says. “What we’re trying to do is qualify more members for loans.”

Another product, CrowdStrike Falcon, helps the credit union examine behavioral indicators to bolster cybersecurity. “It can detect, isolate and respond to threats in real time,” Wyatt explains, as opposed to traditional malware-detection programs, which can take up to three or four months to detect a pattern. By that time, bad actors could have done their damage and moved on to new targets.

Apple FCU is a member of the Curql Collective, a technology capital fund that connects fintech companies creating AI-powered tools with credit unions for investment. In turn, Apple and other members decide which new tools would be appropriate for their organizations. In the past year, with more AI-driven products and entrepreneurs available, Curql (pronounced “circle”) has provided a helpful filter for what’s worthwhile and what isn’t, Wyatt says.

“We get first look at vendors that have products that meet the needs of credit unions, and we go to conferences where they actually bring people in to talk about their products. We evaluate them, and we can vote on them and … fund them or not,” he says. “You kind of see what’s coming down the pipeline.”

Wyatt also attended a December 2023 AI innovation conference at which some of the bigger players like Microsoft and Midjourney rolled out new tools and updates. “Things are changing every three, four days,” Wyatt says. “You kind of have to stay ahead of it, and the hype around it is way beyond the peak of inflated expectations.”

In Virginia Beach, Chartway Federal Credit Union has two AI-powered projects underway that are set to go live in March, says Rob Keatts, Chartway’s executive vice president and chief strategy officer. One is Experian’s custom credit score program powered by AI. The other is a customer-facing telephone banking system that will use a “conversational AI bot,” allowing customers to “call in and just check your balance or move money between your own accounts,” Keatts explains. “And for whatever reason, it is extremely popular with people.”

Interestingly, the demographic breakdown of phone banking shows it is most popular among Chartway’s members over age 50 and its youngest members, in their 20s, Keatts notes. Gen Xers and millennials tend to prefer mobile banking, according to statistics pulled by Chartway’s analytics consultants in late 2022.

Last year, Chartway started Chartway Ventures, a credit union-backed venture capital fund to invest in fintech startups, similar to Curql. It helps Keatts learn more about what tools are under development, as well as what is worth investing in — since part of a credit union’s charter is managing its customers’ money responsibly.

“Being a member-owned cooperative credit union, it’s truly our members’ money,” Keatts says. “We really do look to see [if] we’re going to put out x amount for this product, what are we getting back? And then, from a cybersecurity standpoint, everything goes through our standard security checks before we go live. We do a deep dive into the background of the organization we’re partnering with. We don’t put any sensitive information into a public large-language model like a ChatGPT.”

Keatts also learns about new tools and gets recommendations through relationships with other credit unions and attending events where fintech startups present their products.

Both Wyatt and Keatts note that the size of their financial institutions — sitting in the top 10 largest credit unions based in Virginia — allows them to invest in and explore AI tools more easily than smaller credit unions or smaller banks can.

Big banks, bigger investments

On the leading edge of AI use in the finance sector, however, are the nation’s largest banks, among them McLean-based Capital One Financial and Bank of America. Unlike Apple and Chartway, these banking giants build their own tech in-house in addition to collaborating with third parties.

Capital One’s mobile app and fraud detection tools, among other products, use AI and machine learning, and the bank has created a framework to “manage and mitigate risks associated with AI,” its chief scientist and head of enterprise AI, Prem Natarajan, said during congressional testimony in November 2023. “We have a wide range of tools for managing risk relating to AI, including model risk management, credit risk, strategic risk, third-party risk, data management [and] compliance risk.”

Beyond tools in use by the bank, Capital One has invested in broader educational efforts, including its internal Tech College, which provides training to its employees on machine learning-based systems and products.

Meanwhile, as of July 2023, more than 38 million Bank of America customers had engaged with virtual assistant Erica to manage their bank accounts through 1.5 billion interactions — making requests like “Replace my credit card,” or “What’s my FICO score?”

In Virginia, 27% of Bank of America clients used Erica as of October 2023, up from 22% a year earlier. In September 2023, using the same proprietary AI and machine-learning capabilities as Erica, Bank of America launched an AI chat function for corporate and commercial clients to manage their finances on its CashPro platform.

Like the credit unions, Bank of America is focused on security of its customers’ sensitive information when developing new products.

“Our approach has never been to … chase the shiny objects,” says Nikki Katz, Bank of America’s Los Angeles-based head of digital. “We’re always looking at it from the perspective of, ‘How does this translate to client benefit?’”

Digital banking is in more demand than ever; during the pandemic, many bank and credit union customers moved to online, phone or mobile banking options.

Tom Durkin, global product head of CashPro in Global Transaction Services at Bank of America, says expectations for digital banking “really hit the business community a lot harder, as they had to adapt and leverage some of these capabilities. I think those things factored into expectations … coming off the pandemic, in terms of accessibility to information and the ability to get access.”

Although Bank of America was the first bank to launch a virtual assistant, back in 2018 — an eon ago in technological terms — innovations related to Erica are still going forward even as customer usage increases, Katz notes.

“There’s definitely climbing interest in this space, and we’re continuing to see new applications, whether it’s helping our clients stay on top of their cash flow or changes to their recurring charges,” she says. “As there’s more investment in the space, we’re going to examine opportunities to evolve and improve that client experience and our associate experience with it.”


It’s alive — with possibilities

Just a word of friendly warning: Our November cover story is one of the strangest tales ever told. I think it will thrill you. It may shock you. It might even horrify you. So, if any of you feel you do not wish to subject your nerves to such a strain, now’s your chance to … well, we warned you.

With that tongue-in-cheek nod to the introduction from Universal Pictures’ 1931 horror classic “Frankenstein” behind us, I confess that I’m writing these words in October, smack in the middle of that autumn month filled with ghosts and goblins and things that go bump in the night. And it’s appropriate to reference “Frankenstein,” given that this month’s cover story by Virginia Business Associate Editor Katherine Schulte is concerned with humanity’s quest to artificially replicate intelligence and how the business community hopes to harness that lightning-fast technology for increased productivity and profits — topics that can induce feelings ranging from excitement to dread.

Two of the most common refrains I’ve heard about artificial intelligence this year are these: “You may not lose your job to AI, but you will lose your job to someone who knows how to use it,” and “The opportunity outweighs the fear.”

To be sure, from the moment OpenAI unveiled its ChatGPT generative AI platform to the public one year ago, there have been strong scents in the air of both fear and money.

ChatGPT has passed the nation’s standardized bar exam, scoring better than 90% of lawyers who took the test. It’s been used to diagnose illnesses, research legal precedents and write everything from e-books and marketing emails to Excel formulas and computer code.

Personally, I’ve used it to draft business letters and marketing materials. I find its efforts can generally be too effusive, but even when its drafts require a little tweaking, it admittedly has saved me some time. Similarly, I’ve tasked ChatGPT with organizing large groups of data into spreadsheets. For those chores, the results have been a bit more uneven. ChatGPT can spit out a spreadsheet in a couple minutes or less, but it’s kind of like having a speedy college intern who requires some hand-holding and may be prone to mistakes. Sometimes, in its eagerness to please, ChatGPT will invent missing data without understanding that’s not helpful or appropriate. Other times, it may place data in the wrong rows or columns. However, even with correcting ChatGPT’s work, a job that might have taken me two or three hours on my own only took about 45 minutes to an hour to complete.

And while Virginia Business isn’t using AI to write news stories — sorry to disappoint, but this column was written by a ho-hum human — you may have guessed that the striking art adorning our cover and illustrating its accompanying feature story this month were generated using artificial intelligence.

The past year has seen dramatic improvements in AI art tools such as Midjourney and Adobe Firefly, which have learned from a huge body of existing images (mostly by human artists) to generate new artwork. With Adobe’s latest updates, a minimally skilled user like myself can generate startlingly creative works. In Photoshop, I can take a pastoral farm photo and instantly replace a barn with photorealistic cows just by typing in those words; it will appear as if the barn had never been there. That’s fantastic if I’m creating generic illustrations, but that might be problematic if I’m a real estate agent who’s marketing a specific property and decides to spiff it up to look better than reality. Because we humans are operating this tech, it is as rife with possibilities for productivity as it is for misuse. As Schulte reports in her story, Virginia companies from accounting firms to health care systems and law firms are exploring not only real-world applications for generative AI, but also how to install virtual guardrails around it.

Like Dr. Frankenstein, the geniuses who are spawning today’s AI tools are hardly pausing to consider the ramifications before sending their creations shambling into the world. And like Frankenstein’s lightning-birthed monster, generative AI’s existence presents a host of ethical questions that are fast following behind it.

The next frontier

You come down with coldlike symptoms. Flu season is here, and a new COVID subvariant is circulating. As the illness lingers, you question whether you should see a doctor.

Imagine putting your symptoms into a chatbot connected to your doctor’s office or health system that can retrieve your medical records, evaluate your information and recommend next steps.

“It could make recommendations on … should you be seen by one of our providers in the emergency room? Should you have a virtual visit with a provider? Should you have just a conversation with a triage nurse? Or do you need to schedule an appointment with a provider?” says Dr. Steve Morgan, senior vice president and chief medical information officer at Roanoke-based health system Carilion Clinic.

Such a scenario isn’t science fiction — it exists now, through artificial intelligence-powered tools like Microsoft’s Azure Health Bot. 

“Although we don’t have it now, we’re building the infrastructure to be able to employ that type of technology,” Morgan says. Carilion has already embraced other AI software, like a dictation system for medical notes.

One year after ChatGPT came on the scene, redefining expectations for AI capabilities, industries have already begun adopting AI chatbots in varying forms, including creating their own models. In this Wild West of rapidly developing tech, companies’ workforce training methods range widely, from encouraging employee exploration to structuring rollouts.

Generative AI tools like ChatGPT — AI platforms used to synthesize new data, rather than just analyze data as AI has been traditionally designed to do — are built on large language models (LLMs) that are essentially “glorified sentence completion tools,” says Naren Ramakrishnan, the Virginia Tech Thomas L. Phillips Professor of Engineering and director of Tech’s Sanghani Center for Artificial Intelligence and Data Analytics.

“They sound so realistic and so compelling because they have been trained or learning on a ridiculous amount of data,” enabling the AI engines to learn which words make sense in context, he explains.
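Ramakrishnan’s “glorified sentence completion” description can be illustrated with a toy sketch. The snippet below (the corpus and code are invented for illustration; real LLMs learn these statistics with neural networks over vast datasets) counts which word tends to follow which, then completes a prompt one word at a time — the same next-token task, at a vastly smaller scale:

```python
from collections import defaultdict, Counter

# Toy "language model": tally which word follows which in a tiny corpus.
corpus = (
    "the credit union approved the loan . "
    "the credit union reviewed the loan application . "
    "the bank approved the loan application ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(prompt, max_words=5):
    """Extend the prompt by repeatedly choosing the most likely next word."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = follows[words[-1]].most_common(1)
        if not candidates or candidates[0][0] == ".":
            break
        words.append(candidates[0][0])
    return " ".join(words)

print(complete("the credit"))
```

Trained on three sentences, the model can only parrot its corpus; trained on much of the internet, the same basic objective produces the fluent, contextual responses that make ChatGPT feel intelligent.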

OpenAI’s ChatGPT became a household word shortly after OpenAI released a demo of the conversational AI platform on Nov. 30, 2022. ChatGPT is capable of performing many of the same tasks as human knowledge workers — ranging from drafting emails, business letters, reports and marketing materials to performing paralegal duties, writing computer code, putting data into spreadsheets and analyzing large amounts of data — and it can produce finished work in as little as one second to a few minutes, depending on length and complexity. In March, OpenAI released an updated model, GPT-4, available to subscribers. GPT-4 scored better than 90% of human test-takers on the Uniform Bar Exam, the standardized bar exam for U.S. attorneys.

Generative AI has garnered huge investments. Microsoft has reportedly invested $13 billion in OpenAI since 2019, and Amazon announced in September that it would invest up to $4 billion in Anthropic, an OpenAI rival that has also received $300 million in funding from Google.

In a survey of 1,325 CEOs released in early October by KPMG, 72% of U.S. CEOs deemed generative AI as “a top investment priority,” and 62% expect to see a return on their investment in the tech within three to five years.

Generative AI is developing at a blistering pace. On Sept. 25, OpenAI released a version of ChatGPT that can listen and speak aloud. It’s also able to respond to images.

AI is already changing the work landscape, says Sharon Nelson, president of Fairfax-based cybersecurity and IT firm Sensei Enterprises. “It’s a bolt of lightning. … We’re seeing it go at the speed of light, and I can only imagine that it will go faster still.”

McGuireWoods is providing training on AI basics, ethical use and prompt engineering, says Peter Geovanes, the firm’s chief innovation and AI officer. Photo courtesy McGuireWoods/AI background illustration by Virginia Business staff

Power players

As the tech has quickly progressed, large Virginia companies have formally adopted AI tools and are creating standard AI training policies and processes for their employees.

Reston-based Fortune 500 tech contractor Leidos is providing varying levels of training for employees based on their needs, ranging from those who need to build awareness of AI to subject matter experts. Leidos builds curricula with a mix of external courses from suppliers like Coursera and in-house content, says Doug Jones, the company’s deputy chief technology officer.

Like many companies, Leidos is creating an internal AI chatbot, although the company also plans to offer it to customers. The chatbot will focus on IT and software questions, allowing workers to search for answers specific to the firm.

Businesses with troves of documents can easily adapt an LLM to be specific to their documents and processes, Ramakrishnan says: “I’m noticing everybody wants to create their own LLM that’s specific to them that they can control. Because they certainly do not want to send their data out to OpenAI.” Because ChatGPT learns from its interactions with humans, information entered into the tool could be shared with another user.
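One common way to make an assistant “specific to their documents,” as Ramakrishnan describes, is to retrieve the most relevant internal passage first and hand it to the model as context, so sensitive data never has to train a public model. The documents and overlap scoring below are illustrative stand-ins, a minimal sketch of that retrieval step rather than any particular company’s system:

```python
# Hypothetical internal knowledge base (stand-in documents).
docs = {
    "vpn-guide": "To reset VPN access, open the IT portal and request a token.",
    "expenses": "Expense reports are due by the fifth business day of the month.",
}

def retrieve(question):
    """Score each document by word overlap with the question; return the best match."""
    q_words = set(question.lower().split())
    best = max(docs, key=lambda d: len(q_words & set(docs[d].lower().split())))
    return docs[best]

# The retrieved passage would be supplied to the LLM as grounding context.
context = retrieve("How do I reset my VPN access?")
```

Production systems typically replace the word-overlap scoring with semantic embeddings, but the shape is the same: retrieve first, then generate an answer grounded in what was retrieved.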

Businesses are also taking advantage of generative AI tools built specifically for their industries.

Virginia’s largest law firm, Richmond-based McGuireWoods, is beginning to use CoCounsel, an AI tool designed for attorneys and built on GPT-4 that should allow attorneys to enter client data securely in the near future. Thomson Reuters acquired CoCounsel’s developer, Casetext, in April for $650 million in cash.

CoCounsel has a range of uses, like drafting a discovery response or helping an attorney brainstorm questions for a specific deposition. An attorney preparing to depose an expert witness could feed the tool the expert’s published papers and ask it to summarize them or ask it whether the expert has ever taken a position on a particular subject in them, explains McGuireWoods Managing Partner Tracy Walker.

A widening reach

ChatGPT isn’t always a reliable source, as it sometimes can fabricate detailed answers, a phenomenon referred to as “hallucinations.” One attention-grabbing misuse of ChatGPT that demonstrated this problem occurred when lawyers representing a client in a personal injury case against Avianca Airlines cited six fabricated cases as legal precedent, based on research using ChatGPT. A federal judge fined the firm — Levidow, Levidow & Oberman — and two lawyers $5,000 apiece.

Walker stresses that responsible attorneys will look up and read cases cited by an AI chatbot, but CoCounsel also provides a safeguard, says Peter Geovanes, McGuireWoods’ chief innovation and AI officer: It’s been instructed not to provide an answer if it does not know it.

McGuireWoods is taking a two-phased approach to CoCounsel’s rollout. The first phase, which started in September and is running through the end of the year, is a pilot program with about 40 attorneys. While Casetext completes its security review of CoCounsel, McGuireWoods’ pilot group is using only public data to test hypothetical uses of the tool. Starting in early 2024, McGuireWoods’ phase two testing will likely expand to about 100 attorneys.

In the meantime, Geovanes is leading foundational training about generative AI. The firm’s first brown bag webinar session was set for Oct. 17. Although the curriculum is designed for attorneys, recordings will be available for any interested employee. McGuireWoods also plans to offer outside courses about the responsible and ethical use of generative AI.

For attorneys selected for the pilot program, the firm will also offer specialized training from Casetext on “prompt engineering” — how to phrase questions to the chatbot to get the desired responses.
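The difference prompt engineering makes is easiest to see side by side: the same request, phrased bare versus with a role, context and output constraints. The template below is a hypothetical illustration of that structure, not Casetext’s actual training material:

```python
def build_prompt(task, role=None, context=None, constraints=None):
    """Assemble a structured prompt from labeled parts."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Task: {task}")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    return "\n".join(parts)

vague = build_prompt("Summarize this deposition.")
engineered = build_prompt(
    "Summarize this deposition.",
    role="a litigation paralegal",
    context="Expert witness deposition in a products-liability case.",
    constraints=["bullet points only", "cite page numbers", "flag inconsistencies"],
)
```

The engineered version tells the chatbot who it should sound like, what it is looking at and what form the answer must take — which is most of what “prompt engineering” amounts to in practice.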

In Roanoke and the New River Valley, Carilion is preparing to pilot a new layer of an existing AI-powered transcription tool built for clinicians. The system has used Nuance’s Dragon Medical One, which transcribes clinicians’ notes as they speak, for “a number of years,” Morgan says.

Microsoft purchased Nuance for $19.7 billion in March 2022. In March 2023, Nuance launched Dragon Ambient eXperience (DAX) Express (now DAX Copilot), which is based on GPT-4. It listens to a clinician-patient conversation and drafts clinical notes seconds after the appointment. Morgan hopes to begin piloting DAX in the first quarter of 2024. Because they’ve used Dragon, practitioners likely won’t need much training to adjust to DAX, he says.

Additionally, Carilion is participating in a pilot test of an AI component in the MyChart patient portal offered by Carilion’s electronic medical records vendor, Epic. The AI tool is designed to draft responses to patient questions sent through the portal, taking into account a patient’s medications and medical history. Six Carilion practitioners are participating in the pilot, which started in September, receiving on-the-fly training from Epic and providing feedback.

Examining new terrain

Smaller Virginia companies with fewer resources seem to have taken a more cowboy approach to the new AI frontier, setting ground rules before encouraging employees to explore generative AI tools on their own.

Will Melton, president and CEO of Richmond-based digital marketing agency Xponent21, is also leading a regional work group focused on preparing Richmond’s workforce for AI. Xponent21 initially used Jasper, an AI software tool for writing and marketing, but the firm now uses ChatGPT for tasks like information analysis and developing initial copy, which then goes through human editors.

“I think that the biggest thing that these tools give us is freeing up time that is … spent on monotonous activities that don’t have a lot of value,” like helping employees spend less time writing social media posts or blogs and more time speaking with clients, he says.

Ben Madden, board president for the Northern Virginia Society for Human Resource Management, has begun using ChatGPT in his HR consulting work, asking the AI tool to draft job descriptions and synthesize information for presentations and policy documents.

“Having it be able to do tasks that may take longer without having the support of supercomputers behind it is where I continue to probably see it being used and being able to make my life easier as either a business owner or even for my clients,” says Madden, whose one-person consultancy, HR Action, is based in Arlington County.

Another Richmond-based business beginning to adopt AI is accounting firm WellsColeman, which updated its internet acceptable use policy to include guardrails for AI and ChatGPT usage, like prohibiting employees from entering client data into the platform.

Nevertheless, the firm has encouraged its employees to get familiar with ChatGPT, says Managing Partner George Forsythe. In full firm meetings, leadership will sometimes demonstrate how they’ve recently used ChatGPT, and staff can ask questions or discuss possible uses.

“We’re using [ChatGPT] as an initial step in gaining familiarity with areas that are not part of our everyday expertise. It’s an easy way to get a broad brush on any topic area,” says Forsythe. After verifying the information given, staff can use it as a starting point for their research.

Forsythe has consulted ChatGPT with general management questions like how to communicate with an employee having leadership challenges and has also used it as a marketing aid.

“When it comes to selling our services, I’ve asked it to put together a proposal and make it intriguing and have a hook,” Forsythe says, and he’s been pleased with the results.

Similarly, Winchester-based accounting firm YHB is using generative AI tools for marketing questions that aren’t firm-specific.

“Our team uses [ChatGPT] a ton to help understand and interpret tax laws and information like that,” says Jeremy Shen, YHB’s chief marketing officer. They’ll also ask the chatbot if a website post will have a high search engine optimization score.

The firm is working on selecting an AI tool to formally implement, whether ChatGPT Enterprise, Microsoft’s Copilot or another. For now, “we just kind of said, ‘We know you’re using it. We know people are using it. Here’s some guardrails … but discover and let us know if you come up with something useful,’” Shen says.

Carilion Clinic is participating in a pilot for an AI feature being tested by the health system’s electronic medical records vendor, says Dr. Steve Morgan, Carilion’s senior vice president and chief medical information officer. Photo by Don Petersen/AI background illustration by Virginia Business staff

The new steam engine?

Out of 31,000 people surveyed across 31 countries, 49% are worried that AI will replace their jobs, according to a Microsoft survey released in May. That same month, a CNBC/SurveyMonkey poll found that 24% of almost 9,000 U.S. workers surveyed are worried that AI will make their jobs obsolete.

It’s not an unfounded fear. In 10 years, AI automation could replace about 300 million full-time jobs, according to a March report from Goldman Sachs researchers, but it could also raise the global GDP by 7%, or nearly $7 trillion. In May, IBM CEO Arvind Krishna said AI could replace up to 7,800 jobs — 30% of the company’s back-office workers — over five years.

A refrain commonly heard among AI’s proponents is, “AI won’t take your job, but someone who knows how to use AI will.” It’s paraphrased from a statement made by economist Richard Baldwin, a professor at the International Institute for Management Development, during the 2023 World Economic Forum’s Growth Summit.

“I see some paralegals perhaps being replaced by AI, and only some, because there are some paralegals that have other advanced skills as well,” says Nelson with Sensei Enterprises, who is also an attorney and former president of the Virginia State Bar. Lawyers who do simpler tasks like drafting wills or divorce contracts might be vulnerable to being supplanted by AI, too, she says.

Comparisons to prior technological advances abound. “When the world switched from horse-drawn transport to motor vehicles, jobs for stablehands disappeared, but jobs for auto mechanics took their place,” Federal Reserve Board of Governors member Lisa D. Cook said in a September speech at a National Bureau of Economic Research conference. Workers’ adaptability will depend on their “portfolio of skills,” she said.

Supporters say AI will make employees more productive, which can help industries weather labor shortages and let workers put aside rote tasks to focus on higher-level work, which could increase their job satisfaction.

In the world of government contracting, the constraints on some workers, like getting security clearances and working in-person in a classified environment, can make hiring difficult, says Leidos’ Jones.

“We actually find sometimes we can take some of the tasks that are not as engaging for our own employees [like data entry] … off their plate, and they can spend more time doing the things that are really powerful and unique to humans,” he says.

Forsythe also sees AI as an aid to staff: “Right now, the war is for talent. … If we can’t find more people, one of the things we can do is try to change their roles … and support them in manners that make their jobs easier, not so that way they’ll do more work, but so that way they remain part of the firm and don’t feel overburdened,” he says.

Or it could just improve workers’ quality of life. In an early October interview with Bloomberg Television, JPMorgan Chase CEO Jamie Dimon predicted that time savings from AI could result in a universal 3.5-day workweek — though he also said that he anticipates that AI will result in lost jobs.

While AI will eliminate jobs, it will also create them, experts say. The Washington, D.C., region had almost 1,800 listings for AI-related jobs at the end of August, according to Jones Lang LaSalle. Leidos and Boeing were among the companies with the most openings for AI positions.

New roles are emerging, like “prompt engineers” who develop and refine prompts or queries for AI tools to get the most valuable and appropriate responses. At the end of September, OpenAI rival Anthropic was seeking a “prompt engineer and librarian” hybrid position in San Francisco with a salary range of $250,000 to $375,000.

“The people who study the future of work, they say that certain jobs will go away,” Ramakrishnan says, “… but then there will probably be new jobs created that we don’t know yet.”