
Volume 7, Issue 1

Welcome


Welcome to the first 2026 issue of Decoded -- our technology law insights e-newsletter. As we embark on our seventh year of publishing Decoded, we hope you have found the content interesting and insightful. If you have suggestions or recommendations as we move into 2026, please let us know.


2026 National Labor & Employment Law Symposium, February 1, Steamboat Springs, Colorado


For those of you interested in labor and employment law topics, please join this exclusive gathering of top national and international labor and employment lawyers for the latest legal updates in a close-knit, collegial atmosphere. More than a dozen sessions, in a roundtable format, will cover cutting-edge labor and employment topics. In between sessions, participants will have plenty of time to enjoy skiing in Steamboat Springs or networking over drinks or dinner. Click here to learn more.

 


2026 Smart Business Dealmakers Conference, February 19, Pittsburgh, Pennsylvania


What are CEOs saying about today’s climate for dealmaking? Join Spilman on February 19th at the 2026 Smart Business Dealmakers Conference to hear firsthand insights from Pittsburgh’s top business leaders. Dealmakers gathers hundreds of local CEOs, investors, lenders and service providers so you can stay on top of M&A trends. Don’t miss this high-level conversation on the front lines of buying, selling and scaling.

 

Register with promo code SPILMAN250 for $250 off. Click here for tickets.


As always, thank you for reading.


Nicholas P. Mooney II, Co-Editor of Decoded; Chair of Spilman's Technology Practice Group; Co-Chair of the Cybersecurity & Data Protection Practice Group; and Co-Chair of the Artificial Intelligence Law Practice Group


and


Alexander L. Turner, Co-Editor of Decoded and Co-Chair of the Cybersecurity & Data Protection Practice Group

Cyber Risk in 2026: What Executives Must Know About AI, Fraud, Geopolitics and More

“The report, released ahead of the Forum's Annual Meeting in Davos, Switzerland, draws on a survey of C-Suite executives and industry experts, identifying the various trends influencing the cyberspace.”

 

Why this is important: The World Economic Forum (WEF) is holding its annual meeting this week in Davos, Switzerland. Ahead of the meeting, the WEF published its list of key cybersecurity trends for 2026. These trends include:

 

  • AI will supercharge the cyber arms race. Because of technological advances, AI will be used both to attack organizations’ digital infrastructure and to defend against those attacks.
  • Geopolitics is a defining feature of cybersecurity. The majority of organizations anticipate an increase in geopolitically motivated cyberattacks. In response, organizations are focusing on improved threat intelligence and increased engagement with government agencies. However, confidence in national cyber preparedness is declining.
  • Cyber-enabled fraud is threatening business and households alike. The WEF found that almost three-quarters of respondents to its Global Cybersecurity Outlook survey stated that they or someone in their network had been personally impacted by cyber-enabled fraud in 2025. 

 

These trends exist because of (1) rapidly evolving threats and emerging technologies; (2) third-party and supply chain vulnerabilities; and (3) shortages of cyber skills and expertise. Additionally, the growing dependency on a limited number of digital providers remains a concern because it amplifies concentration risk across the economy. Because these threats impact all facets of an organization, every level of the organization, not just IT, must make cybersecurity a priority in order to protect not only the organization itself, but the economy at large. --- Alexander L. Turner

How CIOs can Brace for AI-Fueled Cyberthreats

“Executives are carefully tracking the rise in AI use for cyberthreats, bolstering basic preparedness tactics and increasing cyber spend in response.”

 

Why this is important: This CIO Dive piece is a timely reminder that AI is not creating entirely new cyber threats. It is making familiar ones faster, higher-volume, and far more convincing, especially phishing and business email compromise. The article points to survey data showing that most security leaders see AI-driven attacks as a major risk, and it highlights that many organizations are responding with increased cybersecurity investment and renewed executive attention.

 

The practical takeaway is straightforward: tighten the basics and tighten them everywhere. Strong multifactor authentication, clear approval workflows (including out-of-band verification for high-risk requests), solid password hygiene, and user awareness remain the controls that prevent real damage. For firms handling sensitive client data, this is a good prompt to validate that these fundamentals are consistently in place, tested, and treated as an enterprise-wide responsibility, not only an IT function. --- James E. Dunlap, Chief Information Officer, Spilman
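As a purely illustrative aside, not drawn from the article, the short Python sketch below shows one way an approval workflow might flag high-risk requests, such as wire transfers or payment-instruction changes, for out-of-band verification before anything is processed. The threshold, request fields, and risk categories are hypothetical.

# Illustrative only: a toy approval workflow that holds high-risk requests
# until they are verified out of band (e.g., a call to a known phone number).
# The threshold, field names, and risk categories are hypothetical.

from dataclasses import dataclass

HIGH_RISK_AMOUNT = 10_000  # hypothetical dollar threshold
HIGH_RISK_ACTIONS = {"wire_transfer", "change_payment_instructions"}

@dataclass
class Request:
    requester: str
    action: str
    amount: float

def needs_out_of_band_check(req: Request) -> bool:
    """Flag requests that should never be approved on email alone."""
    return req.action in HIGH_RISK_ACTIONS or req.amount >= HIGH_RISK_AMOUNT

def process(req: Request, verified_out_of_band: bool) -> str:
    if needs_out_of_band_check(req) and not verified_out_of_band:
        return "HOLD: confirm with the requester through a separate, trusted channel"
    return "APPROVED"

print(process(Request("cfo@example.com", "wire_transfer", 250_000), verified_out_of_band=False))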

FDA Clarifies Oversight of AI Health Software and Wearables, Limiting Regulation of Low-Risk Devices

“The FDA clarified that many low-risk AI-enabled software tools and consumer wearables fall outside medical device regulation when clinicians can independently review the device’s clinical recommendations.”

 

Why this is important: In January 2026, the FDA issued nonbinding guidance clarifying regulatory oversight of certain low-risk digital health products, including AI-enabled software and wearable devices. The guidance reiterates that general wellness products, such as fitness trackers and health apps that pose minimal risk and promote healthy lifestyles, are typically subject to enforcement discretion and do not require FDA regulation. It also clarifies that clinical decision support software falls outside FDA medical device oversight when it provides recommendations that clinicians can independently review and that are not the sole basis for clinical decisions.

 

Taken together, the guidance signals a risk-based regulatory approach under which many AI tools and consumer wearables that influence health behavior or support clinicians, but do not make autonomous or unreviewable medical decisions, may avoid premarket review and other regulatory requirements. High-risk products that diagnose, treat, or prevent disease remain fully regulated. The guidance also contributes to broader FDA efforts to modernize digital health oversight, including a 2025 pilot program with CMS to evaluate digital health tools using real-world data, while emphasizing that it reflects current policy thinking rather than creating new legal obligations. --- Shane P. Riley

Lawsuit Accuses Google AI Assistant of Surreptitiously Accessing Gmail and Messaging Files

“Prior to October, users of Google services had to manually allow Gemini to access the private contents of Gmail, Chat and Meet.”

 

Why this is important: A class action lawsuit claims that Google covertly enabled its AI assistant, Gemini, to track its users’ private communications in their Gmail, Chat, and Meet accounts without the users’ knowledge or consent. The deceptive conduct alleged in the lawsuit hinges on the fact that Google previously required users to opt in to this feature, meaning they had to actively and explicitly agree, whereas the feature is now enabled by default, requiring users to go into their settings to disable it. The choice between opt-in and opt-out consent frameworks carries significant legal and practical implications and is a key consideration in privacy law. Some regulations mandate an opt-in approach for certain data; for example, it is an essential component of GDPR compliance. California, where the lawsuit was filed, has the most comprehensive privacy laws in the U.S., and the plaintiffs allege that Google’s shift to opt-out consent violates laws such as the California Invasion of Privacy Act (CIPA) and the Stored Communications Act.

 

While it is uncertain whether Google is using the private communications as training data, the lawsuit argues that it at least has the capacity to do so. Google’s privacy statements do not categorically exclude such data from AI training, and the Gemini privacy policy is unclear about what user data might be passively logged. The opacity of training data has been a principal issue throughout the rise of advanced large language models (LLMs). AI chatbots have been found to leak hard-coded secrets, regurgitate sensitive training material, and produce nonsensical or malicious output. Such opacity has led many organizations to ban the use of AI assistants in the workplace, given the uncertainty around data security.

 

Ultimately, this lawsuit underscores a familiar principle in privacy law: meaningful consent is not merely about disclosure, but about user control. As AI systems become more deeply embedded in everyday communications, courts and regulators will continue to scrutinize whether default-enabled data collection crosses the line from innovation into unlawful surveillance. --- Alison M. Sacriponte

The Data Center Rush in Appalachia

“According to a September 2025 report from the Energy & Manufacturing in Appalachia initiative, approximately 92 gigawatts of data center capacity are currently in the pipeline across the United States, with seven gigawatts being added monthly by the end of 2024.”

 

Why this is important: The U.S. Department of Energy projects that data centers could consume between 6.7 and 12 percent of total U.S. electricity by 2028. This growth requires new power generation. With traditional data center hubs such as Northern Virginia, home to some 300 data centers, becoming saturated, data center developers and tech companies are pushing for growth in depopulated former coalfield communities in Kentucky, West Virginia, and Virginia. This growing industry could have an impact on tax revenue, the job market, and the environment. --- Taiesha K. Morgan

Physicians Turning to AI for Clinical Support, not Just Paperwork, Athenahealth Survey Finds

“Physicians report rising comfort with artificial intelligence as a chart reviewer and clinical assistant, but data gaps persist.”

 

Why this is important: AI has become deeply integrated into our healthcare system. A recent survey by athenaInstitute finds that the use of AI is expanding in healthcare as it now supports clinical decisions during patient care. The survey, fielded by Sago Health, polled 501 physicians and practice administrators across the United States and found that the majority of those polled are using AI to quickly access a patient’s clinical information and test results, and relying on the technology to catch details across patient records. A critical inflection point has been reached, with the discussion of AI in healthcare shifting from “will it be adopted” to “how will it be adopted.” Physicians are certainly seeing the benefit of efficient, automated information synthesis, which can help improve care delivery and patient outcomes. Other benefits identified include faster creation of patient care plans, reduced billing and coding errors, and sophisticated pattern analysis that increases confidence in clinical decisions. Concerns voiced by some of the physicians polled included data barriers from fragmented or outdated information systems, which make the AI less efficient or helpful, and the lack of human connection. --- Jennifer A. Baker

Research: Conventional Cybersecurity Won’t Protect Your AI

“Legacy defenses, designed for rule-based software, cannot safeguard gen AI systems that learn and adapt from data.”

 

Why this is important: If you are using AI in your organization, you cannot rely on traditional cybersecurity practices to protect your data. Researchers recently discovered that bad actors can exploit AI, like Microsoft 365 Copilot, and obtain confidential information without the need for user interaction. What this means is that current cybersecurity models are not designed to protect an organization’s AI tools. This is because AI learns not just from the information users input into it, but from various other data streams, which create vulnerabilities that traditional cybersecurity measures are not prepared to defend against. These covert attacks include:

 

  • Data poisoning - The deliberate corruption of an AI’s training data, in which attackers insert false or biased information that skews outcomes in ways that may remain invisible until decisions turn catastrophic (a toy illustration follows this list);
  • Adversarial prompts - Another rising threat, these are prompts that trick models into violating their own boundaries, whether by leaking confidential case details from a legal AI or generating malicious outputs; and
  • Model inversion attacks - These occur when hackers extract sensitive training data or even reconstruct proprietary algorithms.
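
As a purely illustrative sketch, not drawn from the article, the toy Python example below shows the mechanics behind data poisoning: flipping the labels on a narrow slice of training examples quietly shifts the model’s learned decision boundary, so a case the clean model classifies one way comes out differently from the poisoned model. The dataset, model, and threshold are hypothetical stand-ins for a real pipeline.

# Illustrative only: a toy demonstration of data poisoning. Flipping the labels
# of a small, targeted slice of training data changes what the model learns,
# without any obvious sign of tampering. Dataset and model are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training data: one feature, label 1 whenever the feature exceeds 0.5.
X = rng.uniform(0, 1, size=(500, 1))
y_clean = (X[:, 0] > 0.5).astype(int)

# Poisoned copy: an attacker flips labels for examples just above the boundary,
# nudging the learned threshold upward.
y_poisoned = y_clean.copy()
y_poisoned[(X[:, 0] > 0.5) & (X[:, 0] < 0.65)] = 0

clean_model = LogisticRegression().fit(X, y_clean)
poisoned_model = LogisticRegression().fit(X, y_poisoned)

probe = np.array([[0.6]])  # a case the clean model treats as positive
print("clean model:   ", clean_model.predict(probe)[0])    # expected: 1
print("poisoned model:", poisoned_model.predict(probe)[0])  # likely: 0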

 

The issue with AI security is not an application problem, but an infrastructure and supply chain problem. Consequently, protecting your AI systems requires shifting your focus to hardening the underlying architecture and management processes on which AI is built, because traditional cybersecurity tools cannot keep pace with complex, data-driven AI workloads. Ultimately, given the shortage of specialized talent and secure infrastructure, the best way to protect your AI systems is, ironically, to use AI to protect them. --- Alexander L. Turner

PA Businesses are All in on AI

“With $90 billion in investments on the horizon, the commonwealth is positioning itself as a leader in artificial intelligence.”

 

Why this is important: Artificial intelligence is rapidly transforming the global economy and Pennsylvania is exceptionally well-suited to lead this transformation within the United States. Pennsylvania is projected to attract more than $90 billion in AI-related investment in the coming years, fueled by commitments from major technology companies such as Amazon, Google, and Anthropic. Business leaders argue that PA’s combination of abundant energy and water resources, extensive physical infrastructure, and world-class academic institutions gives it a competitive advantage that few other states can match, enabling both large-scale AI infrastructure development and sustained innovation.

 

AI adoption is already producing measurable gains across multiple sectors of the Pennsylvania economy. In the legal field, firms are using advanced AI tools to summarize records, compare documents, and analyze large data sets, delivering significant efficiency gains for clients, particularly in the insurance sector. Financial institutions, including statewide credit unions, are employing AI to detect and prevent transactional fraud, enhance cybersecurity, and respond more quickly to increasingly sophisticated threats driven by generative AI itself.

 

The healthcare sector is also an area where AI could have a transformative impact. Leaders at Jefferson Health describe how AI-driven tools are being used to reduce administrative burdens on clinicians, reclaim millions of hours of clinical time, and improve both documentation accuracy and job satisfaction. By using ambient AI technologies to automatically capture and structure patient-provider conversations, healthcare providers can focus more directly on patient care while simultaneously improving system efficiency and workforce morale.

 

Across all industries, workforce training, education, and early adoption are vital to fully realize AI’s benefits. Statewide initiatives, including partnerships between the Pennsylvania Chamber of Business and Industry and companies like Google, aim to equip small businesses and workers with practical AI skills. Regardless of age or profession, individuals and organizations that resist learning and integrating AI risk falling behind in an economy increasingly defined by artificial intelligence. --- Shane P. Riley

AI in the Doctor’s Office: How Standards Can Support Trustworthiness

“If AI will work in the medical field (or any other field it's used in), we need to develop specific and useful standards.”

 

Why this is important: To save time, many doctors are using AI transcription services to capture information during patient encounters and automatically enter it into the patient’s medical chart. AI chatbots are also expected to become more commonplace as a means of responding to basic medical questions asked by patients. With the advancement of AI in the healthcare industry, Ram D. Sriram, Chief of the Software and Systems Division for the National Institute of Standards and Technology’s Information Technology Laboratory (ITL), argues that standards for AI reliability and trustworthiness are needed.


To be useful, information generated by AI needs to be correct, and therefore trustworthy. In addition, the datasets AI tools draw on must be reliable. Standards will be critical to evaluating AI tools and producing more reliable and trustworthy output. The work at the National Institute of Standards and Technology (NIST) will help shape voluntary AI standards in the healthcare industry, which will bolster innovation, not hinder it. One example of AI’s benefits comes from stem cell treatment for macular degeneration: a patient’s own cells can be grown into stem cell implants to preserve vision affected by age-related macular degeneration. During the manufacturing process, these living cells undergo transformations, increasing the risk to the patient. However, AI technology is being used to predict which cells will work best for a patient, thus minimizing that risk.


Mr. Sriram expresses his belief that AI will help doctors, not replace them. He argues that the technology will augment intelligence and can make access to healthcare more readily available. While acknowledging the risks associated with AI in the healthcare industry, Mr. Sriram argues that those risks can be managed with the right framework. NIST is working to help appropriately consider and manage those risks with its free tool found here. --- Jennifer A. Baker


This is an attorney advertisement. Your receipt and/or use of this material does not constitute or create an attorney-client relationship between you and Spilman Thomas & Battle, PLLC or any attorney associated with the firm. This e-mail publication is distributed with the understanding that the author, publisher and distributor are not rendering legal or other professional advice on specific facts or matters and, accordingly, assume no liability whatsoever in connection with its use.



Responsible Attorney: Michael J. Basile, 800-967-8251