Welcome
Welcome to the 11th and final 2025 issue of Decoded -- our technology law insights e-newsletter.
Newest Decoded Author
In addition to the variety of issues we address in this edition, we are welcoming Spilman's Chief Information Officer, James Dunlap. James is an experienced CIO and Governance, Risk and Compliance (GRC) leader with more than 15 years of experience driving compliance and risk programs across the banking, fintech, and healthcare industries. He brings a unique perspective, one we felt would be of interest to our readers. Welcome, James!
2026 National Labor & Employment Law Symposium, February 1, 2026, Steamboat Springs, Colorado
For those of you interested in labor and employment law topics, please join this exclusive gathering of top national and international labor and employment lawyers for the latest legal updates in a close-knit, collegial atmosphere. More than a dozen sessions, in a roundtable format, will cover cutting-edge labor and employment topics. In between sessions, participants will have plenty of time to enjoy skiing in Steamboat Springs or networking over drinks or dinner. Click here to learn more.
A sincere thank you to everyone who read Decoded this year. As we plan for 2026, please let us know what you would like to see in the new year. Do you have topics you would like us to discuss? A change in the format? Any other suggestions? We welcome all thoughts and suggestions. Email us and we will be sure to discuss the best ways to keep this publication at a high level of value.
Congratulations to Gerald Titus on His Appointment to the West Virginia Supreme Court
We are very pleased to announce that partner Gerald M. Titus III has been appointed by Governor Patrick Morrisey to serve on the Supreme Court of Appeals of West Virginia. He will fill the vacancy created by the tragic passing of Justice Tim Armstead. Michael J. Basile, Spilman’s Managing Member, added, “Governor Morrisey could not have selected a more capable or principled individual. Gerald has been an indispensable member of our firm for many years. His legal acumen, judgment, and character make him exceptionally well-suited for this role, and we are immensely proud to see him reach this milestone in his career.” Click here to learn more.
Nicholas P. Mooney II, Co-Editor of Decoded; Chair of Spilman's Technology Practice Group; Co-Chair of the Cybersecurity & Data Protection Practice Group; and Co-Chair of the Artificial Intelligence Law Practice Group
and
Alexander L. Turner, Co-Editor of Decoded and Co-Chair of the Cybersecurity & Data Protection Practice Group
“Nearly 40% of respondents identified AI-based frameworks as their top priority for reducing cyber risk over the next three years, the report found, with 23% identifying AI-powered threat analysis and 17% identifying DevSecOps.”
Why this is important: A new report from Amazon Web Services highlights a growing shift among security leaders toward using artificial intelligence not just to detect threats, but to strengthen governance and automate core security functions. Nearly 40 percent of organizations surveyed said AI-based governance frameworks are their top priority for reducing cyber risk over the next three years, with another quarter pointing to AI-driven threat analysis and a rising number investing in DevSecOps practices. Despite this enthusiasm, only about one-third of organizations currently automate their Security Operations Center (SOC) processes, and respondents expressed continued hesitation about shifting sensitive workloads to AI-enabled cloud environments due to privacy and cybersecurity concerns.

The trend underscores a broader evolution: AI is moving from a tactical capability to a foundational governance tool. As organizations adopt AI-based automation and analytics, they must integrate these systems into their risk and compliance structures, ensure proper oversight and auditability, and update third-party and cloud-security evaluations to account for emerging AI-specific risks. For security and GRC teams, the takeaway is clear: AI is becoming part of the control environment itself, and governance models need to modernize accordingly.

This includes rethinking traditional control testing to account for algorithmic behavior, documenting how AI systems make decisions, and ensuring human oversight remains embedded where regulatory expectations require it. Organizations will also need to establish lifecycle management for AI models, monitor for drift or unintended outputs, and verify that automation enhances rather than obscures incident-response and audit trails. As AI becomes woven into the operational fabric of risk management, proactive governance will increasingly determine whether organizations realize the benefits of AI or introduce new, unmonitored vulnerabilities.
--- James E. Dunlap
“The prevailing assumption is that executives only need high-level briefings, not practical AI training. That’s wrong.”
Why this is important: A recent MIT report, The GenAI Divide: State of AI in Business 2025, found that 95 percent of AI initiatives fail to provide a return on investment (ROI). The culprit appears to be that the C-Suite executives making decisions about AI adoption, governance, and resource allocation lack the practical AI training necessary to do so. The result is a disconnect between what gets purchased and what gets adopted, leading to ineffective execution. Instead, senior leaders should have knowledge of and experience with AI platforms so they understand how these tools affect products and services, customer operations, and value chains. They need to be able to analyze and interpret data sets with generative AI tools to generate critical insight and guidance. Finally, senior executives need to develop human-centered leadership skills in order to cultivate trust, lead effectively through necessary change, and create environments where employees welcome redefined roles.
Effective training of C-Suite executives starts with customization tailored to the actual information senior executives work with and the specific problems they need to solve. Potential areas of impact for AI training include using AI to coach junior employees, connecting AI tools to CRM data, analyzing financial forecasts, and preparing for board meetings.
In addition to training, C-Suite executives must understand that AI implementation is a company-wide objective and responsibility. One of the suggested means of accomplishing this goal is for senior executives to lead by example to garner confidence in employees’ use and iteration of AI tools. The companies that will succeed are the ones where senior leaders are knowledgeable and experienced with AI tools so that they understand their functionality, can identify areas of failure or improvement, and encourage experimentation. --- Jennifer A. Baker
“The signing ceremony took place in Hanoi on October 25, and the terms of the cybercrime treaty will go into force 90 days after at least 40 of the signatories formally ratify it.”
Why this is important: The UN treaty that supporters tout as a significant advancement in international cooperation to curb cybercrime and to facilitate the sharing of electronic evidence when serious crimes are committed across borders is being called a “Trojan Horse” by cybersecurity experts, journalists, and human rights organizations. Why this dichotomy? To question what exactly this treaty is designed to accomplish, one need only consider that it was Russia -- a country well known as a cybercrime sanctuary -- that initially pushed for the adoption of such a treaty in 2019 and that later signed it second, just behind Vietnam and just before China. The answer to that question depends on who you ask, and it is far from clear.
Proponents contend that the treaty strikes a delicate balance between what authoritarian countries, on the one hand, and democracies, on the other, will tolerate. The treaty establishes a 24/7 cooperation network and obligates only those countries that ratify it to share electronic evidence relating to cyber-dependent and cyber-enabled offenses. What could go wrong? Plenty -- according to a significant group of cybersecurity, privacy, and human rights organizations, who point to weaknesses in the drafting.
Critics point to the treaty's broad scope and vague definitions, which pose the risk that authoritarian countries will use it as a means to better control internet and communication technologies, criminalize online expression, engage in unchecked surveillance activities, promote censorship, and further suppress human rights. They note that the lack of safeguards on how shared electronic information will be used and maintained poses risks to privacy and the potential for data breaches, leading to more cybercrime -- not less. Critics also note that many countries already have cross-border cooperation agreements in place to facilitate data-sharing for legitimate criminal investigations, and that such cooperation should not be expanded to countries known for widespread human rights violations.
The divide over this treaty is significant. The U.S., Canada, Japan, India, New Zealand, and Israel are among the 121 countries that have yet to sign. In addition to Russia, Vietnam, and China, early endorsers include the European Union and 13 of its member states, along with North Korea, Venezuela, and Nicaragua. --- Lori D. Thompson
“Despite bubble fears, analysis predicts record spend that will reshape bond markets.”
Why this is important: JPMorgan Chase estimates that more than $5 trillion will be invested over the next five years to build out global data centers, AI infrastructure, and the power systems required to support them. The bank expects 122 gigawatts of new data center capacity from 2026 to 2030, though growth will be limited by major power constraints and long construction timelines for new energy sources.
The scale of spending is expected to draw capital from nearly every financing source, including public markets, private credit, alternative funding, and government support. While major companies like Amazon, Google, and Microsoft are expected to fund a large portion of the buildout through their substantial operating cash flows, JPMorgan estimates a remaining funding gap of roughly $1.4 trillion. Private credit and potential government backing may be needed to fill this shortfall.
The report predicts that high-grade bonds could supply up to $1.5 trillion over five years, while leveraged finance and securitization markets may contribute another $350 billion.
JPMorgan cautions that this investment boom carries risks, including parallels to past tech bubbles such as the telecom overbuild of the early 2000s. To achieve a sustainable 10 percent return on investment, the AI sector would need to generate roughly $650 billion in annual recurring revenue, a challenging threshold. Given the scale and competitive dynamics, the bank expects both major successes and significant failures as the sector evolves.
Smaller AI and data companies shouldn’t try to compete in the massive infrastructure spending race. Instead, they should focus on specialized markets, build defensible IP such as proprietary datasets and compliance frameworks, stay lean by leveraging existing platforms, and prioritize efficiency and recurring revenue. Forming strategic partnerships, aligning with government initiatives, and designing products with those goals in mind will strengthen stability and acquisition potential. Ultimately, success will come from solving specific problems efficiently. --- Shane P. Riley
“The data breach took place back in February, but a months-long investigation means the automaker is just now beginning to warn customers about it.”
Why this is important: This latest data breach underscores the growing vulnerability of automakers to cyber threats and the significant, widespread impact such incidents can have.
As car features increasingly rely on connectivity, car manufacturers face a growing risk of threat actors hacking their systems and stealing the sensitive personal information of millions of drivers. In this instance alone, Hyundai reportedly notified approximately 2.7 million drivers whose personal information may have been compromised. Further, the breach involved highly sensitive personal information, including names, Social Security numbers, and driver’s license numbers.
Hyundai is not alone, as Jaguar Land Rover also suffered a breach earlier this year that resulted in the shutdown of production for weeks and billions of dollars lost in revenue.
The impact of data breaches can be devastating for companies and the individuals affected, which is why it is critical to take action to protect your company’s data. We at Spilman are here to help and can assist you in putting much-needed data security policies and procedures into action. --- Suzanne Y. Pierce
“The latest research is another step toward a reality where it may be possible to manufacture functional human organs for transplantation, which could be a big deal because the current standard of relying on donors and the right blood match has made transplant wait lists tremendously long.”
Why this is important: There are currently over 100,000 people in the U.S. who are waiting for an organ transplant. Scientists may soon be able to eliminate the wait altogether if they are able to perfect the 3D printing of human tissue. ETH Zurich scientists have developed a method to 3D print human muscle in microgravity. The 3D printing of tissue has to be done in microgravity because gravity adds stress to the materials used in the process and prevents muscle fibers from aligning properly. But scientists are not flying to space to conduct these experiments. They are using parabolic flights to simulate microgravity and printing the muscle tissue while the craft is experiencing weightlessness. This is the latest step toward the printing of human organs for transplant. Scientists are also working on the 3D printing of human retinas, they have been successful in printing liver tissue, and they have even successfully transplanted a 3D-printed windpipe. These breakthroughs have the potential to revolutionize medicine and save millions of people’s lives. --- Alexander L. Turner
“The registry’s purpose is to track and approve state projects that would use AI technology.”
Why this is important: The Virginia Information Technologies Agency (VITA) provides services -- including security, project management, procurement, and information technology planning -- to 65 executive branch agencies of the Commonwealth, and it is responsible for creating AI standards. Pursuant to a January 18, 2024, executive order of Gov. Glenn Youngkin, VITA created Virginia’s AI registry for agencies to submit potential AI use cases, or projects, for review and approval. The purpose of the registry is to track state projects that would use AI technology. The AI tools Smartling and CoPilot Chat have been made available to all executive branch agencies. The problem emerging since the AI registry went live is that state agencies working to comply with the registry process are complaining of inconsistent process guidance and coordination. Other noted problems include lax case status updates and confusion over which types of use cases must be reported. VITA is working to address these frustrations, as the registry is a key part of the agency’s approach to oversight of AI tools at this critical time when AI use is accelerating. Examples of AI already in use in Virginia’s agencies include helping the Department of Transportation determine when roads need to be paved, expediting the Department of Environmental Quality’s review and approval of water permits, and supporting VITA’s creation of programs for more efficient and improved technology processes. --- Jennifer A. Baker
“Financial firms should be performing regular oversight of their vendors to avoid supply chain compromises, according to a new report.”
Why this is important: A new BitSight analysis shows a widening and increasingly concerning gap between the cybersecurity performance of financial institutions and the vendors that support them, revealing that suppliers significantly underperform their own customers across nearly every measured domain. The study found that third-party vendors trail financial firms in 16 out of 22 critical security categories -- including patch timeliness, web application security, encrypted communications, open-port exposure, and vulnerability remediation -- sometimes by double-digit margins. Even more concerning, financial firms are only monitoring about 36 percent of their active vendors, leaving roughly two-thirds of their supply chain without direct visibility despite relying on these same vendors for core services ranging from software development to data processing. According to the report, unmonitored vendors are nearly three times more likely to have exposed critical vulnerabilities or validated exploits in the wild, underscoring that visibility alone dramatically improves security performance. Although the financial sector performs better than most industries, where vendor monitoring averages just 25 percent, the scale and complexity of modern financial ecosystems make even small gaps consequential. Regulators such as the Federal Reserve, OCC, FDIC, and SEC have all signaled heightened scrutiny around third-party oversight, emphasizing continuous monitoring, contractual accountability, and tighter governance around vendor-supplied code and managed services. For financial institutions navigating elevated extortion attempts, software supply-chain attacks, and increasingly interconnected cloud platforms, the message is clear: third-party cybersecurity can no longer be treated as an extension of the vendor’s responsibility. It is a core element of enterprise risk. 
Strengthening oversight programs, enforcing measurable performance standards, and expanding monitoring deeper into the vendor ecosystem are now essential steps to closing the security gap and preventing the next supply-chain compromise. --- James E. Dunlap
“The technology’s use will lead to short-term cost reductions for the banking sector, but savings won’t last long, according to McKinsey.”
Why this is important: Some banks are betting big on AI, investing heavily to drive down operational costs and gain marketplace advantages. The article predicts that AI's most significant impact on the banking industry will be a human-AI collaborative model in which one human employee supervises dozens of AI agents managing complex workflows. But at least one bank is being cautious, fearing that the rush to adopt AI may be inefficient and may result in models that deliver no tangible increase in operational speed or cost savings. --- Nicholas P. Mooney II
“The plausible mechanism pathway is designed to help people with rare diseases that are fatal or associated with severe childhood disability, inspired by the story of Baby KJ — an infant with CPS1 deficiency who received the first-ever personalised CRISPR gene editing therapy.”
Why this is important: The FDA is introducing a new regulatory pathway called the plausible mechanism pathway to make it faster and easier to approve personalized medicines designed for single patients, particularly for rare and fatal genetic diseases. Instead of requiring large clinical trials, therapies may be approved based on evidence from only a few well-designed patient cases, using patients as their own controls when appropriate. The initiative was inspired in part by Baby KJ, the first patient treated with a personalized CRISPR therapy, whose treatment showed meaningful clinical improvement. To qualify, diseases must have a clear genetic cause and a targeted treatment approach with demonstrated biological activity before and after dosing. Over time, data collected across multiple individualized treatments can support broader approvals and platform-based products. While focused initially on cell and gene therapy for rare diseases, the FDA expects the approach may eventually apply to other drugs and conditions with unmet medical needs.
This new pathway likely reduces costs compared to traditional large-scale clinical trials for rare diseases, although the personalized nature and specialized manufacturing of the therapies may still make seeking FDA approval expensive. While there are no guarantees, the new plausible mechanism pathway is likely to lead to more approvals of personalized therapies for rare genetic disorders. --- Shane P. Riley
“Biometric technologies were originally designed to improve security and streamline authentication, but they’re often misused in ways most people don’t notice.”
Why this is important: Biometrics are everything that makes you who you are. Biometric technologies include facial recognition and fingerprint and eye scans, and they can even identify a person by the way they walk. Biometric technologies are used to improve security and streamline authentication. But just like any technology, they have weaknesses that can be exploited. One is biometric spoofing, in which someone imitates your biometrics to fool a system -- for example, using AI-generated pictures to trick facial recognition or AI-generated audio to mimic a person’s voice. Bad actors even use silicone or gelatin to replicate fingerprints. All of this sounds like it is straight out of a movie, but it is being used today to gain access to organizations’ systems to steal money or information. Governments around the world are trying to combat this problem. In the U.S., the proposed American Privacy Rights Act would mandate revocable biometric templates and very low false acceptance rates for high-assurance systems. However, the U.S. has a poor track record for passing federal privacy statutes. Organizations can protect themselves from biometric spoofing by adopting facial liveness detection, multifactor authentication, regular software updates, employee education, and AI and machine learning tools. Security technology is not static, and neither are the attacks against it, so organizations have to stay current on threats and how to counter them. --- Alexander L. Turner
“Manufacturing and energy firms saw some of the biggest increases in malware activity targeting connected devices.”
Why this is important: In previous issues of Decoded, we have reported on the seemingly endless wave of cyberattacks on the country’s critical infrastructure. This article reports on the “escalating volume of cyberattacks on Android devices” in the manufacturing, healthcare, and energy sectors. The proliferation of mobile devices used for work has led to new potential vulnerabilities. The number of reported attacks on the transportation, government, and construction sectors is also up. The manufacturing, finance, healthcare, and education sectors heavily use Internet-of-Things devices, which creates another attack surface for threat actors. These attackers are also incentivized to target manufacturers because a successful attack at that level can disrupt an entire supply chain and economic sector. The punchline to this article is that attacks are increasing across the board in the country’s critical infrastructure, and companies should implement top-shelf protections and remain vigilant against attacks. --- Nicholas P. Mooney II
“The publishing powerhouse disclosed that the threat actor used stolen credentials to breach Slack accounts after an employee’s computer was infected with an info-stealer malware.”
Why this is important: This data breach demonstrates how no company is immune to the threat of hackers, even the largest, and the growing sophistication of the methods used to steal personal information.
Nikkei is the world’s largest business news outlet, based in Tokyo, Japan, owning over 40 affiliates, including The Financial Times. It has more than 3.7 million paid subscribers and over 1.7 million daily readers.
In this case, the threat actor used stolen credentials to breach employee Slack accounts after an employee’s computer had been infected with information-stealing malware -- a type of malicious software designed to covertly collect sensitive data from a victim’s device and send it to a remote server controlled by the threat actor. The stolen information is then used for identity theft or financial gain.
Upon learning of the data breach, Nikkei responded by implementing immediate security measures and requiring mandatory password resets across the board. However, the breach had already exposed names, email addresses, and chat histories; approximately 17,368 employees and business partners were impacted.
This breach perfectly illustrates how today’s attacks progress – starting with a compromised endpoint and quickly pivoting into a high-value SaaS environment. The initial malware was just a foothold that opened the door for the attacker to steal valid credentials and blend seamlessly into normal business activities, appearing as legitimate employees. --- Suzanne Y. Pierce
“Construction attorneys share how contractors can protect themselves in the high-demand, fast-track environment.”
Why this is important: Construction in the data center sector is booming in the U.S., with large investments (for example, Amazon’s $20 billion commitment in Pennsylvania) and major players getting involved. On the surface, a data center structure may resemble a warehouse, but the contract risk is significant. These projects demand sophisticated power, utility, and technology systems; many are delivered under EPC (engineering, procurement, construction) or turnkey models; and they are typically on fast-track schedules that put a heavy burden on contractors.
Construction lawyers engaged in a data center project, therefore, need to focus on contract language in this niche area to protect contractors. Because delivery schedules are compressed and the technical and regulatory demands are elevated, clauses addressing scope changes, supply chain disruption, material cost escalation (e.g., tariff exposure), specialized labor/immigration compliance, and safety/OSHA risk in high-voltage, fast-build environments are critical. As such, failure to ensure proper risk allocation and protections in these areas could lead to disputes or cost overruns in this rapidly expanding segment of construction. --- Jonathan A. Deasy
This is an attorney advertisement. Your receipt and/or use of this material does not constitute or create an attorney-client relationship between you and Spilman Thomas & Battle, PLLC or any attorney associated with the firm. This e-mail publication is distributed with the understanding that the author, publisher and distributor are not rendering legal or other professional advice on specific facts or matters and, accordingly, assume no liability whatsoever in connection with its use.
Responsible Attorney: Michael J. Basile, 800-967-8251