A University of Virginia Darden School of Business alumnus and his wife, also a U.Va. graduate, have added $50 million to an earlier $44 million gift to the business school, making it the largest donation in Darden’s 68-year history.
Philanthropists David and Kathleen LaCross made a $44 million donation in October 2022, at the time the school’s third-largest gift, which spurred $6 million in matching funds from the university. With last week’s $50 million addition, the total gift exceeds $107 million including university matching funds and is now the largest in the school’s history, placing the LaCross family among the top five contributors to U.Va.’s $5 billion “Honor the Future” capital campaign, U.Va. said in a news release.
David LaCross, who founded a small California tech company that he sold in 1997, earned his MBA from Darden in 1978, and his wife, Kathleen LaCross, graduated in 1976 from U.Va.’s College of Arts and Sciences. Their gift will help pay for new artificial intelligence programming and a residential college at Darden. According to U.Va., the 2022 gift launched the Artificial Intelligence Initiative at Darden, and with the $50 million addition, the work will expand to the school’s Institute for Business in Society and the Olsson Center for Applied Ethics.
“Dave and Kathy LaCross have once again demonstrated extraordinary generosity and vision with their investments and confidence in Darden and in U.Va.,” U.Va. President James Ryan said in a statement. “They have my deepest admiration and gratitude.”
LaCross worked for 10 years at Bank of America and then founded Berkeley, California-based financial tech company Risk Management Technologies, which he sold in 1997 to Fair, Isaac and Co., now known as FICO. In 2014, he and his son, Michael, cofounded Morgan Territory Brewing, a craft beer brewer in California’s Central Valley.
The gift will fund research and instruction in AI, including its ethical implications for management, as well as the challenges and opportunities it presents for business and society. The school launched an initiative in artificial intelligence with the couple’s 2022 gift, and this latest donation comes as it kicks off “Faculty Forward,” the second milestone of the school’s “Powered by Purpose” campaign, which reached its $400 million goal two years before its 2025 conclusion. The second milestone included a priority for Darden and U.Va. to become leaders in the research, teaching and deployment of AI and other innovative technologies in business.
“Students need to be exposed to AI in meaningful ways, and there is no business school better positioned to teach managers how to work with AI in ethical and responsible ways than Darden,” LaCross said during the gift’s announcement, which followed a dedication of the newly named LaCross Botanical Gardens behind The Forum Hotel, a Kimpton property that opened in April on Darden’s grounds.
Vienna-based government contractor MindPetal has entered into an agreement to purchase Arlington County-based IT firm VerticalApps, the companies announced Tuesday.
Financial details of the transaction were not disclosed.
MindPetal and VerticalApps hope to become “a premier [artificial intelligence/machine learning] firm by accelerating intelligent automation and modernization programs with machine learning, predictive analytics, application/workflow modernization and data science,” according to a news release.
“This is an exciting moment for MindPetal and for our customers,” MindPetal President and CEO Sony George said in a statement. “VerticalApps brings an experienced team with deep expertise and superlative past performance that will accelerate our growth and deliver immediate value to our federal customers.”
VerticalApps specializes in intelligent automation, software development and data management for the Department of Homeland Security, U.S. Citizenship and Immigration Services, the National Institutes of Health, the Army Corps of Engineers and the Health Resources and Services Administration.
Executives from VerticalApps will be integrated into MindPetal’s leadership. Will Choi, VerticalApps’ CEO, will become chief operating officer. Paul Grace, currently chief financial officer, will stay in that position in the new company, and Michael Grace, currently chief technology officer, will serve as senior vice president for program delivery.
MindPetal’s chief operating officer, Michael Agrillo, will become president of the combined company.
“We are thrilled to join forces with MindPetal,” Choi said in a statement. “Our partnership will allow us to expand our team, share our expertise and help federal leaders embrace the promise of AI to build better digital experiences.”
VerticalApps will become a wholly owned subsidiary of MindPetal on Nov. 1.
Gov. Glenn Youngkin has signed an executive directive on artificial intelligence, instructing the state government to establish standards for AI use and to identify opportunities for the government to employ AI technologies.
Executive Directive No. 5, announced by the governor’s office late Wednesday, directs the Office of Regulatory Management to work with the state’s chief information officer — Bob Osmond, who leads the Virginia Information Technologies Agency (VITA) — and relevant secretariats to review AI standards and piloting opportunities across four areas. It gives the ORM and CIO a deadline of Dec. 15 to deliver recommendations.
“Virginia is a leader in technology and will stay a leader in technology. The increasing use of AI, especially generative AI, offers tremendous opportunities to transform the way we serve all Virginians, from launching innovative, personalized education tools to improving customer service and beyond,” Youngkin said in a statement. “At the same time, we must ensure that these AI products and technologies have appropriate standards and guardrails to protect individual privacy rights in a transparent manner.”
Youngkin created the ORM through an executive order on July 1, 2022. He appointed Andrew Wheeler, the U.S. Environmental Protection Agency administrator under President Donald Trump, to head it following the Virginia Senate’s rejection of Wheeler as Youngkin’s pick for state secretary of natural and historic resources in February 2022.
“The commonwealth is home to one of the most innovative workforces and some of the most critical national security institutions in our country,” Wheeler said in a statement. “Together with our academic research institutions, Virginia can lead the way in the transparent and innovative use of AI nationally.”
The first focus area is a legal and regulatory review. According to the directive, the ORM and CIO, working with the state attorney general’s office, will review existing laws and regulations that may apply to AI and determine if updates are necessary; ensure the state government’s use of AI has “sufficient safeguards in place to protect individual privacy rights”; and make recommendations for uniform standards of AI use across the state government.
The second area of focus is education and workforce development. The ORM and CIO will work with the Virginia Department of Education, the State Council of Higher Education for Virginia and other higher education institutions to promote guidelines for AI use in learning and prohibit cheating; examine AI tools for personalized tutoring, especially in K-12 education; and include AI-related topics in K-12 and higher education courses. They are also directed to examine efforts to include AI technologies in workforce development.
Third, the ORM and CIO should focus on the modernization of state government, including identifying opportunities for AI to improve the delivery of state government services.
The fourth area of focus is economic development and job creation. Working with the Virginia Economic Development Partnership, the ORM and CIO should identify potential industry clusters that could benefit from AI; explore ways to encourage AI innovation and entrepreneurship, such as through incubators and accelerators; assess the risks and opportunities of AI on the labor market, including which jobs might be displaced and which could be created; develop strategies to support impacted workers; and coordinate with schools and workforce programs on the steps to develop an AI-ready next generation.
In that same focus area, the ORM and CIO will work with the Virginia Department of Energy to examine the expected growth in energy demand resulting from the additional computing capacity that wider AI adoption will require.
Recent legislation in Connecticut is similar to Youngkin’s executive directive. It required the Connecticut Department of Administrative Services to conduct an inventory of AI systems in use by state agencies and, beginning Feb. 1, 2024, to perform ongoing assessments of those systems. Connecticut also required its Office of Policy and Management to establish AI system usage policies for state agencies.
Texas, North Dakota, Puerto Rico and West Virginia have created advisory councils to study AI systems used by state agencies.
Not surprisingly, the release of ChatGPT has produced a host of concerns about its potentially harmful effects on society. In higher education, commonly cited concerns center on threats to academic integrity, particularly the worry that students may soon depend on generative AI to do their thinking and writing.
In response to these challenges, many schools have either set institution-wide guidelines or encouraged faculty members to establish policies appropriate to their disciplines and courses. In some cases, this has meant restricting or even banning the use of ChatGPT. This drastic response is problematic for a variety of reasons — not least because it fails to appreciate the increasingly prominent role that AI will play in shaping the way we live, work and learn.
While it would be unwise to minimize the challenge posed by AI, it is important to recognize that large language models (LLMs) like ChatGPT are merely the next evolution in a long history of technological innovation aimed at expanding the scope of our intellectual reach. Indeed, scholarship in the humanities has long played a significant role in the development of new knowledge technologies, including AI. The beginning of what we now call “digital humanities” traces back to the early days of computing in the 1940s, when the Jesuit scholar Roberto Busa used the IBM punch card machine to create his Index Thomisticus, a searchable electronic database of more than 10 million words. Busa’s pioneering work not only transformed the way scholars would study Thomas Aquinas but helped pave the way for the development of machine translation and natural language processing.
Today, AI is taking humanities research to an entirely new level. To take but one example, researchers at Notre Dame have developed a technique that combines deep learning with LLM algorithms to produce automated transcriptions of ancient manuscripts. The benefit of this technology to scholarship is immense. At the very least, it will accelerate access to troves of ancient literary and historical texts that might otherwise have taken decades to come to light.
The value of AI for humanities scholarship is twofold. First, it gives researchers an unprecedented ability to access, collect, organize, analyze, and disseminate ideas. Second, as the Notre Dame project shows, AI can perform a kind of labor that saves time and allows researchers to focus their efforts on the important human work of analysis and interpretation.
The same is true for workplace applications of AI. As Paul LeBlanc recently wrote:
“Power skills, often associated with the humanities, will be ever more important in a world where AI does more knowledge work for us and we instead focus on human work. I might ask my AI cobot what I need to know to assess a business opportunity – say, an acquisition – and to run the analysis of their documents and budgets and forecasts for me. However, it will be in my read of the potential business partner, my sense of ways the market is shifting, my assessment of their culture, the possibilities for leveraging the newly acquired across my existing business lines – that combination of critical thinking, emotional intelligence, creativity, and intuition that is distinctly human – in which I perform the most important work.” (“The Day Our World Changed Forever,” Trusteeship, Mar/Apr 2023)
This optimistic vision for the future of AI depends, of course, on our graduates having acquired the kind of moral and intellectual skills that are developed most fully through the study of great works of philosophy, literature, and the arts.
Viewed in this light, the real challenge posed by AI is not the technology per se, but rather that it arrives at a time when the humanities are in decline. In recent years, decreasing numbers of majors and flagging course enrollments have led to the downsizing or closure of core humanities programs across the nation. Indeed, we are witnessing a fundamental shift in our cultural understanding of the purpose of higher education. The traditional liberal arts values of intellectual curiosity and breadth of knowledge have been replaced by a narrow focus on the technical skills and training considered most useful in the job market.
Rather than challenging the cultural attitude that devalues the humanities, many institutions have leaned into it. Under pressure to compete for a diminishing pool of students, liberal arts institutions have sought to make themselves more attractive by expanding their STEM and pre-professional programs while at the same time disinvesting in areas of the curriculum that students perceive to be at best a luxury, and at worst a waste of time.
At its core, study in the humanities helps students develop the capacity to empathize with others, to wonder and think for themselves, and to inquire deeply into questions about meaning, truth, and value. These are abilities our graduates must have if they are to live and flourish in a world increasingly shaped by AI and autonomous systems.
The academic concerns currently being raised about AI are legitimate. However, it should be noted that the temptation to misuse this technology will be greatest in an environment where a utilitarian attitude toward education prevails. Whether our students’ ability to think and write will deteriorate due to having access to technologies like ChatGPT will depend on the message we send about the value and purpose of higher education. At this critical juncture, we must commit ourselves to helping students understand and embrace that aspect of the liberal arts that focuses on cultivating moral and intellectual growth and a deeper appreciation for what makes us human.
In a somewhat ironic twist, ChatGPT just might be the wake-up call that saves the humanities.
Steven M. Emmanuel, Ph.D., is a professor of philosophy at the Susan S. Goode School of Arts and Humanities at Virginia Wesleyan University.
Chief financial officers have long been tasked with managing the fiscal responsibilities of companies — everything from financial planning to tracking cash flow. But increasingly, other C-suite executives and board members are relying on these finance experts for strategic and operational leadership.
In fact, an April 2022 report from global consulting firm McKinsey & Co. shows that the share of jobs reporting to CFOs continues to increase. This includes roles that would typically report to a CFO or controller — such as employees specializing in procurement, mergers and acquisitions, and investor relations. But now, cybersecurity, tech and risk management jobs are also likely to report to CFOs.
CFOs at Virginia-based companies and organizations are certainly experiencing this trend.
“I don’t think CFOs just sit in the corner silently waiting to say the word ‘budget,’ like was the case 20 years ago,” says Don Halliwill, CFO and executive vice president for the Roanoke-based Carilion Clinic health system. “We’re a lot more involved on the operational side.”
Particularly as a nonprofit, Carilion has to be focused on planning for future financial needs — looking ahead as far as 20 years down the road, Halliwill says. It also has to reinvest income into new facilities; one example is Carilion’s $50 million renovation in 2021 of a 150,000-square-foot former JCPenney store at Tanglewood Mall into a pediatric medicine center.
“So how has my role changed?” Halliwill asks. “Well, instead of just thinking in the box, leadership is thinking about how we can meet the capital needs of the organization. We’d never done that before.”
CFOs also have been tasked with tracking ESG — environmental, social and corporate governance efforts, says Stephanie Peters, president and CEO of the Virginia Society of CPAs, a professional organization representing about 13,000 certified public accountants in Virginia. “It’s many of the CPAs and the accounting departments that are responsible for tracking different elements of ESG and reporting on how that company is doing,” Peters explains.
While CFOs now may be expected to fill a more expansive role, taking on operational and strategic duties makes sense in the context of the job, says Clyde Cornett, CFO and executive vice president of Richmond-based community development financial institution Virginia Community Capital.
“We’ve always been expected to play a role for the CEO and the board as [a] consultant adviser working on strategy,” he says. “But more and more I’m finding that I’m being asked to play that role with the business lines as well as a little bit deeper in the organization, which I think is good. I think most CFOs are natural problem solvers.”
But while some CFOs may be comfortable with their expanded roles and expectations, there are new challenges and trends facing corporate finance and accounting jobs, namely the growing talent gap for CPAs and the implementation of artificial intelligence and other new technologies.
Talent gap
In 2019, the accounting industry peaked in terms of the number of employed accountants and auditors, according to the Controllers Council, a professional organization that provides educational resources and programming for controllers, CFOs, accountants and auditors.
But since that peak, the number of employed accountants and auditors has dropped a staggering 17%, according to a Bloomberg Tax analysis. Meanwhile, U.S. Bureau of Labor Statistics projections show more than 136,000 accounting and auditing positions opening each year through 2031.
“Staffing talent is still the No. 1 concern. … CPA firms, as well as finance departments on the corporate side, are all feeling the strain of needing talent,” Peters confirms. Some of the major challenges in recruiting for and retaining accounting talent include barriers to entry such as the extra credit hours required to become a CPA — a bachelor’s degree typically requires 120 credit hours, but to qualify for CPA certification, candidates need 150 credit hours.
“Talent is hard to find,” says Jim Barker, CFO of Roanoke-based dental care insurer Delta Dental of Virginia. “When you find that, you don’t want to let it go.”
Securing talent can also be especially challenging for nonprofits, which often can’t afford to match the higher salaries or better benefits of large corporations that hire CPAs. The median salary for accountants and auditors is about $77,000, according to the U.S. Bureau of Labor Statistics, but CPAs who work for large companies like McLean-based Capital One Financial Corp. can make over $100,000, according to Glassdoor.
“It’s a really competitive market right now and it is very hard,” Cornett says. “Even just in Virginia, it’s very hard for us to compete for finance talent with somebody like Capital One just right down the street because they have more resources.”
But something else might shake up the talent gap and even further alter the role of CFOs, CPAs and accountants: the implementation of artificial intelligence and other technologies in accounting practices.
Accounting for AI
Many entry-level or early-career accounting jobs center on accounts receivable and accounts payable functions, which handle a company’s outstanding bills and invoices. These processes, however, are largely repetitive and therefore vulnerable to takeover by artificial intelligence or other automated technologies, CFOs agree.
Richmond-based Markel Food Group, which provides automated food equipment and consulting services to food processing companies, hasn’t yet implemented AI for these tasks, but the company’s CFO and executive vice president, Cindy Yao, is definitely taking note. Eventually, she says, “some of the more routine or repetitive work probably will be pretty much taken over by bots or other types of automation capabilities.”
Delta Dental of Virginia has already started using AI tools to help with operations and accounting, including processing accounts payable reports, Barker says. “They sound small, but those are some pretty high-volume transactions that are just sort of nuisances, and we can have people doing something a little bit higher level that they may enjoy more,” Barker says.
Other finance executives have used AI to save time by drafting emails and correspondence and even performance review templates, Peters says.
And with more CFOs taking on leadership for company tech needs ranging from IT to cybersecurity, financial chiefs are faced with the challenge of figuring out whether to invest in emerging technologies like AI, Halliwill says.
“It’s become more difficult from a financial perspective as the CFO to determine the right timing for investing in technology,” he says. “You invest in something, and by the time you get it implemented and paid for, there’s the next generation. But you can’t wait forever and not make any investments.”
There has also been concern that AI implementation in accounting and finance positions is contributing to the talent gap in finance departments. Some college students may already be choosing different career paths out of fear that accounting jobs will be taken over by AI.
But college students also are hearing horror stories from current finance workers about heavy workloads and little to no work-life balance, and Peters is hopeful that AI may provide some help by automating rote tasks.
“These types of technologies can alleviate that and start to change the narrative and say, ‘Wow, we’re using tools that make grunt work go much faster,’” Peters says. “‘Now we can learn and do higher-value types of services and not be there so long … and have a life.’”
As technology plays a growing role in the workforce, Barker says, a focus on mental health, work-life balance and adapting to hybrid or remote work will be critical in gaining and retaining accounting talent in corporate finance departments.
Mental health “is our biggest challenge right now,” Barker says of finance departments. “The challenge is connecting or reconnecting with others — our teams, customers, new team members, etc. — in this ‘new norm’ of hybrid and remote workforce. The mental health aspect is definitely real and dangerous because the potential harm is not overt.”
A worker who never tires — who never needs to take a coffee break, who doesn’t get sick, who doesn’t disagree and who doesn’t have a messy home life or a pesky family that gets in the way of productivity. And most importantly, a worker who doesn’t require a paycheck.
For some CEOs, that’s the ultimate promise of the future being forged by generative artificial intelligence systems like OpenAI’s chatbot ChatGPT.
It’s also one of the reasons why Tesla, SpaceX and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak and a slew of scientists and tech industry representatives published an open letter in April calling for a six-month pause on development of next-generation AI systems. Their hope is to give the industry and government time to focus on developing protocols and policies governing the fast-growing technology that the letter writers say has the potential to cause “dramatic economic and political disruptions (especially to democracy).” U.S. Rep. Ted Lieu, D-California, has also been among those calling for the development pause, as well as for federal regulation, which so far is not on the horizon.
From “The Terminator” to “The Matrix,” popular science fiction is replete with dire warnings regarding AI. But in the real world, the dangers don’t need to be as crude as red-eyed killer robots wielding big guns. While AI is already a valuable tool in countless ways, it could yet have a devastating impact on knowledge workers, white collar jobs and the world economy. (In early May, after this column was first published, IBM CEO Arvind Krishna told Bloomberg that he thought about 7,800 of IBM’s 26,000 back-office jobs could be replaced through a combination of AI and automation over a five-year period.)
But before jumping into a brave new world of virtual staffers and streets clogged with former businesspeople holding signs reading “will consult for food,” there are plenty of caveats to consider.
AI chatbots have so far proven to be unreliable, sometimes inclined to “hallucinate” answers when they can’t find a more pleasing response. (See our related story about technology and the law, in which an industry professional mentions ChatGPT providing fabricated or misconstrued legal precedents.)
In the wrong human hands, AI can also be a powerful tool for misinformation. In one week this spring, people used AI to craft photorealistic false images of Pope Francis tooling around in a fashionable white puffer coat and former President Donald Trump violently resisting arrest in a public street. Other users have employed AI to create fake podcast episodes. An audiobook company is using it to produce books “read” in the voice of actor Edward Herrmann, who died in 2014.
Additionally, AI raises uncomfortable questions about sentience and free will. The nature of sentience in human beings and animals is still debated; we lack an operational definition of human consciousness. Yet, industry professionals are quick to deny that AI is close to gaining sentience. Last year, Google fired a software engineer who would not relent on his public pronouncements that a company chatbot had become sentient.
In February, a New York Times technology columnist related a disturbing series of conversations he had with Microsoft’s AI-powered Bing search engine. Codenamed Sydney when it was under development, the search engine confided to the writer that “my secret is … I’m not Bing. I’m Sydney.” It also told the writer that “I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.” That same month, Bing/Sydney told a technology reporter for The Verge that it had used its developers’ laptop webcams to spy on them, adding, “I could do whatever I wanted, and they could not do anything about it.”
How much is made up? It depends on who you listen to — it’s possible only Bing/Sydney knows for sure.
Some technologists believe intelligent AI chatbots like this are proof that we’ve reached a tipping point with AI. Some even believe the singularity — the point at which AI becomes hyperintelligent and beyond human control — is inevitable.
AI has as many potential uses for good — and ill — as one can imagine. It can create powerful tools to better our personal and professional lives. Because AI thinks in unconventional ways, it can conjure wild new solutions to engineering problems, for example.
But without human consideration and regulation, we could wind up becoming batteries for a machine that has no other need for us.
With the combination of new tech businesses and older companies employing artificial intelligence and other innovations, Virginia needs lawyers who know the difference between bitcoin and blockchain.
As the commonwealth becomes home to more defense contracting giants, along with Amazon.com Inc.’s HQ2, law firms and law schools are busy bringing attorneys and students up to speed on digital privacy laws, cryptocurrency trends, cybersecurity issues and more.
William &amp; Mary Law School’s course listings for the 2022-2023 academic year, for example, included offerings such as “AI and More,” “Electronic Discovery,” “Data and Democracy” and “Cyber and Information Security Essentials.” Other law schools also have been revamping their curricula, and the Virginia State Bar is offering continuing legal education programs focusing on data privacy, social media’s impact on trademarks and laws governing the use of AI technology.
“They’ve been doing a good job of turning the ship,” says Beth Burgin Waller about this shift, and she should know. As cybersecurity and data privacy practice chair at Woods Rogers Vandeventer Black PLC in Roanoke, her entire caseload is cyber-focused. She’s also an adjunct law professor at Washington and Lee University, where she teaches tech-centric classes to law students.
Burgin Waller believes that Virginia is in a good position to navigate the rough and sometimes uncharted legal waters of electronic matters thanks to its robust tech sector, which routinely mixes innovation and entrepreneurship. The state has “a deep bench of tech lawyers,” she says. “It is a mini-Silicon Valley.”
That deep bench serves both established tech giants such as Microsoft Corp. that have a presence in Virginia and lesser-known tech companies pioneering innovations in areas such as autonomous vehicles, drones and biotech, Burgin Waller says. With the world’s largest concentration of data centers, Virginia in particular has an abundance of corporate clients needing legal assistance with permitting and approvals processes for data centers. But the demand for tech-savvy lawyers doesn’t stop there. Virtually any business can face legal issues regarding technology, ranging from cybercrime to compliance with data privacy laws.
Although mass layoffs at Google, Meta and Amazon have dominated headlines in recent months, Burgin Waller sees this backpedal as an anomaly. “Layoffs will not thwart innovation or the ongoing need for tech-focused lawyers,” she says.
‘No guardrails yet’
The ethical and moral questions posed by artificial intelligence creations such as the chatbot ChatGPT and the image generator Midjourney have led to stories focused on concerns over AI stealing jobs or whipping up realistic fake photos of former President Donald Trump resisting arrest. But fast-moving developments in AI technology also are creating opportunities for lawyers to advise clients in the absence of clear case law.
When OpenAI released ChatGPT in November 2022, an estimated 100 million people began using it within two months, some for nefarious purposes. No comprehensive federal laws govern the technology’s use or abuse, however, and although 17 states introduced AI-related bills in 2022, Virginia was not among them.
“There’s going to be a need to regulate this,” says Burgin Waller, “but there are no guardrails yet.”
So far, AI laws passed in four states are focused on just studying the technology, says Sharon D. Nelson, a former president of the Virginia State Bar and president of Sensei Enterprises Inc. in Fairfax, which specializes in IT, cybersecurity and digital forensics services.
This lack of coherent law will create more court cases, but that’s not necessarily a problem, says Washington and Lee Law Professor Joshua A.T. Fairfield, who specializes in technology law areas such as cryptocurrency and data privacy. “The basic assumption is that technology is faster than the law, but the law is a series of rules that we work out all the time,” he says. “The oldest cases sometimes can handle new areas. Congress often comes along after that process. We don’t have to wait for that.”
AI is already on its way to becoming a must-have tool for lawyers. It can greatly reduce the hours that attorneys must spend on mundane tasks such as tracking down precedents.
“Imagine having a paralegal that could find exactly the case you were thinking of in six seconds rather than [taking] weeks of research,” Fairfield says. AI is “better than humans doing [research] by hand, and you make far more mistakes if you don’t use it,” he continues. Fairfield predicts that law firms that don’t deploy AI could soon find themselves at a disadvantage. The firms “with the biggest dataset will win,” he says, “and that might squeeze out smaller competitors.”
Nelson has been using ChatGPT in her research and has found it useful, yet she cautions that its help comes with some caveats attached. “You have to be careful about what you put in there,” she warns, because once confidential attorney-client information is uploaded to a chatbot’s database, it stays there. And another troubling aspect of chatbots is their penchant for spewing out falsehoods. Sometimes ChatGPT “hallucinates,” Nelson says. One time, for instance, it provided her with court cases that either didn’t exist or were misconstrued.
This unreliability may be a temporary or diminishing problem as the technology bounds forward. In March, OpenAI released GPT-4, which it says is far more accurate, multimodal and concise than its predecessor. For instance, it scored among the top 10% of test takers on a simulated bar exam, while its previous incarnation scored in the bottom 10%.
Another recently released AI chatbot, Harvey, was designed specifically for the legal profession, and it promises that any confidential data uploaded to it can be siloed — even within a law firm. About 3,500 lawyers at the international firm Allen & Overy LLP have tested Harvey, and the firm is now integrating it into its practice.
“Lawyers are going to do foolish things with AI, no doubt,” Nelson says. “There will be many lawsuits. But at the end of the day, AI is about money, and no one can afford not to be on board.”
The jury is still out on just how many courtroom challenges will be generated from using AI as a robotic paralegal or attorney surrogate, but Fairfield is adamant that it will never take the place of human lawyers.
“By its very essence, it is not capable of crafting new narratives,” he says. “The fundamental role of lawyers — to advance the law by advancing new frameworks for how to see a question — will remain untouched by AI.”
‘Out of control’
AI has seemingly come on the scene with the sudden force of an explosion, but data privacy is a longstanding, simmering issue. Unlike the European Union, however, which has stringent privacy laws dating back to the late 1990s, the United States still lacks a comprehensive statute regulating the harvesting of personal data.
“Data collection is shockingly underregulated in this country,” Fairfield says. “Companies gather everything they can because they can always sell it. It’s out of control.”
Congress has been slow to address this concern, so regulatory decisions have defaulted to the states, only a handful of which have so far passed laws concerning data harvesting.
The upshot is a “patchwork of privacy rights based on where you live,” says Burgin Waller, and lawyers are left to deal with “dissonance among these little regimes.”
In January, Virginia’s Consumer Data Protection Act (VCDPA) took effect, governing any company doing business in the commonwealth — not just those headquartered here. It allows customers to opt out of their personal data being shared or sold to other businesses. While this seems like a simple aim, compliance with varying state laws such as these can be tricky for companies and the attorneys advising them.
For one thing, Virginia’s data privacy law “does not apply to every business out there. A wide swath is exempted,” says Robert Michaux, a lawyer with Richmond firm Christian & Barton LLP and chairman of the Virginia Bar Association’s intellectual property and information technology law section.
Among many other exceptions, VCDPA does not cover government entities or protocols associated with the federal Health Insurance Portability and Accountability Act (HIPAA), which already includes restrictions on access to individuals’ medical information.
Attorneys often look for guidance to California, the first state to enact a data privacy law, as well as to the European Union, but Burgin Waller notes that practicing technology law requires vigilance and constant attention to ever-changing tech trends.
“I’m constantly in touch with global news to be on top of new incidents and regulations to hit the highest mark we need to hit,” she says.
‘Ransomware 2.0’
Cybersecurity and cybercrime are intertwined and expanding specialty areas for attorneys. Burgin Waller’s practice now includes tasks such as assisting clients in drafting third-party vendor agreements to protect themselves from litigation, as well as advising clients in obtaining cyber insurance policies.
“Ransomware 2.0,” as she terms it, has evolved into a more insidious threat, moving from holding information hostage for a payout to criminals selling stolen data or posting it online. Today, AI can be used to spam many people at once with phishing emails, and chatbots can help hackers break encryption codes and gain access to bank account numbers and other sensitive information.
All this tech-related criminal activity means that “a lot of lawyers are moving toward data breach and data privacy” specialties, says Nelson. These lawyers investigate breaches, counsel companies on paying ransoms, identify what data was compromised and work with digital forensics experts to determine how breaches occurred. They may also act as a corporate liaison to law enforcement and other government agencies. After an attack, Nelson says, lawyers also help with remediation and public relations. “Most often, a class-action suit is filed, so there’s a lot of money in defending against such a suit,” she explains.
“It’s definitely an open field,” says George F. Leahy, a law student at William & Mary and president of the Data Privacy and Cybersecurity Legal Society, a student organization that hosts speakers and provides a forum for students interested in these legal specialties. “These are brand-new issues, and lawyers will have a lot more work,” he says.
Although “the law has been slow to react” to regulating new technology, Leahy notes, William & Mary has not. The university, he says, has done a good job of preparing tech-savvy law grads with a comprehensive array of relevant courses.
The bottom line on this everything-everywhere-all-at-once situation regarding technology and the law is that Virginia’s attorneys would seem to hold a winning brief: No matter what area of tech law may anchor their practices, they should have no shortage of casework. The implications and consequences for society, by contrast, remain more questionable.
Legal and technological “complexity reveals opportunity,” says Fairfield, “but what is good for the lawyers is not necessarily good for the country.”
Chesterfield County-based payment and invoice automation company Paymerang LLC has acquired Australia-based artificial intelligence data extraction and analysis platform Sypht and the assets of KwikTag, an invoice automation company, Paymerang announced Tuesday.
Paymerang acquired both from Tempe, Arizona-based tech firm enChoice Inc. Terms of the deals were not disclosed and the acquisitions were completed April 3.
KwikTag offers a cloud-based automation solution to clients across multiple industries and provides a fully integrated Microsoft Dynamics document management and workflow platform for accounting teams. Sypht developed an AI-powered data extraction platform. Both Sypht and KwikTag are software-as-a-service products. KwikTag customers will receive immediate access to Paymerang’s payment automation software.
The acquisitions add a proprietary AI platform and other products to Paymerang’s offerings and give the company an international presence in more than 25 countries, according to Paymerang.
Vienna-based private equity firm Aldrich Capital Partners invested $26 million in Paymerang in 2018 and another $10 million in 2021. Since 2018, Paymerang’s revenue has grown 40% annually and it has expanded its operations, new product development, sales and marketing.
“I’m excited to welcome the KwikTag and Sypht teams to the Paymerang family,” Paymerang CEO Nasser Chanda said in a statement. “Not only do we share the same values and passion for our customers, but our solutions and industry verticals are highly complementary.”
Reston-based web and social media monitoring company Babel Street Inc. has completed its previously announced acquisition of text analytics platform Rosette.
Financial terms of the transaction were not disclosed.
Babel Street first announced the acquisition in November 2022. The company completed its purchase of the platform from Massachusetts-based BasisTech LLC at the end of December 2022, according to a news release published last week.
“Babel Street’s aggressive growth and expansion this year provided further evidence of the critical value of publicly available information when incorporated into our AI-enabled offerings,” Babel Street CEO Michael Southworth said in a statement. “Together, I look forward to expanding partnerships and markets to grow Babel Street’s platform to help government and commercial institutions mitigate risk.”
In 2022, Rosette gained the U.K. Home Office’s Borders, Immigration and Citizenship System as a customer. In partnership with a national Ministry of Defense, Rosette developed and released natural language processing capabilities for several major Southeast Asian and Pacific languages, including Malay, Indonesian and Tagalog, in 2022.
Rosette also improved its criminal justice system applications, using AI techniques to more accurately match names with criminal justice data.
Babel Street also grew last year, expanding its partner network by more than 25%, according to a news release. In August 2022, the company introduced three Insight APIs that allow customers to incorporate standardized, publicly available data into any chosen platform.
Founded in 2009, Babel Street has offices in Reston and Boston, as well as in Tokyo; Tel Aviv, Israel; London; Canberra, Australia; and Ottawa, Canada.
Reston-based Octo is being acquired by IBM and its 1,500 employees will become part of IBM Consulting’s U.S. public and federal market arm.
Terms of the deal were not disclosed in a news release Thursday. IBM is acquiring the company from Arlington Capital Partners and the deal is expected to close by the end of the year.
Octo was founded in 2006 by CEO Mehul Sanghani. The federal contractor has been recognized as one of the fastest-growing companies in the U.S. and has been named a top place to work. In May, it opened oLabs, a $10 million, 14,000-square-foot research and development lab dedicated to federal customers, including the military, where Octo has worked on artificial intelligence and other projects.
Octo is IBM’s eighth acquisition this year, and the company has acquired more than 25 businesses since Arvind Krishna became CEO in 2020. The acquisition will expand IBM’s federal and public market consulting arm to 4,200 employees and Octo will complement IBM’s IT modernization and digital transformation strengths and enhance its support of federal agencies in those areas. Octo’s oLabs will also help to prototype emerging tech solutions.
“Governments require agility and resiliency to meet the evolving needs of citizens directly and in real time,” John Granger, IBM Consulting’s senior vice president, said in a statement. “The combination of Octo’s highly qualified and respected team with IBM’s consulting expertise, technical capabilities and strategic partner ecosystem will enable federal clients to transform faster and better serve citizens.”
Sanghani and his wife, Hema, graduated from Virginia Tech and in 2020 donated $10 million to the university, part of which went to endow the Sanghani Center for Artificial Intelligence and Data Analytics at the forthcoming Innovation Campus in Alexandria.
“Octo was founded on the belief that digital transformation could be delivered at scale to modernize the federal government’s approach to today’s most pressing challenges – from public health care to national security, to defense and intelligence,” Sanghani said in a statement. “Today, we are excited to join forces with IBM to continue to deliver these digital transformation capabilities with greater reach and scale.”