Welcome
Welcome to the ninth 2025 issue of Decoded, our technology law insights e-newsletter.
For those of you interested in labor and employment law issues, you are cordially invited to join us on October 24 in Winston-Salem, North Carolina for our annual SuperVision Labor & Employment Symposium - Working Hard or Hardly Compliant: Strategies, Solutions and Surprises in the New World of Labor and Employment Law. This complimentary symposium is tailored for business owners, C-suite executives, HR professionals, and anyone who manages employees. Dive into a day of valuable insights on employment topics such as HR impacts from this administration, employee leave and accommodation, employee relations, data privacy and cybersecurity, restrictive covenants, and much more. Learn strategies and solutions to tackle the ever-changing world of labor and employment law! Click here to learn more and register.
We hope you enjoy this issue and thank you for reading.
Nicholas P. Mooney II, Co-Editor of Decoded; Chair of Spilman's Technology Practice Group; Co-Chair of the Cybersecurity & Data Protection Practice Group; and Co-Chair of the Artificial Intelligence Law Practice Group
and
Alexander L. Turner, Co-Editor of Decoded and Co-Chair of the Cybersecurity & Data Protection Practice Group
“The proposed settlement is a result of a class action lawsuit, Bartz v. Anthropic, from last year, in which authors filed a complaint against Anthropic for using copyrighted materials to train its large language models (LLMs), including Claude.”
Why this is important: The Bartz v. Anthropic settlement represents a watershed moment in AI copyright litigation that will reshape how the industry approaches training data acquisition. As the first major copyright lawsuit against an AI company to reach settlement, it establishes important precedents for dozens of similar pending cases.
Judge Alsup's nuanced ruling created a crucial distinction for future litigation. While finding that Anthropic's use of copyrighted books for training constituted fair use, he ruled that acquiring over 7 million works from pirate sites clearly infringed the authors' rights. The message to AI companies is that training on copyrighted materials may be defensible, but only when those materials are acquired through legitimate channels.
The proposed $1.5 billion settlement, at approximately $3,000 per work for around 500,000 books, could establish an industry benchmark. However, Judge Alsup expressed serious concerns about the settlement's structure, particularly the unclear claims processes and incomplete work identification. His demand for comprehensive material lists reflects broader concerns that class action settlements often leave affected parties inadequately informed.
The successful class certification represents a significant procedural victory, as aggregating individual copyright claims was expected to be a major hurdle. This development likely encourages comprehensive class actions over individual lawsuits, potentially increasing settlement values and legal exposure for AI companies.
Strategically, this case validates the controversial "build first, litigate later" approach major AI companies adopted, but it also demonstrates the substantial financial consequences of that approach. While the settlement is manageable at Anthropic's scale and requires only the destruction of improperly obtained materials rather than fundamental model changes, it will likely encourage companies to invest more heavily in legitimate licensing arrangements upfront. The case provides a clear industry roadmap suggesting training on copyrighted works may be defensible under fair use, but acquisition methods remain crucial to avoiding infringement claims. --- Shane P. Riley
“While facial recognition technology is unregulated at the federal level, 23 states have now passed or expanded laws to restrict the mass scraping of biometric data, according to the National Conference of State Legislatures.”
Why this is important: Your face is a key component of your identity. It may also be the key to unlocking your phone. Your face is important biometric data that tech companies are using to create facial recognition search engines for tracking purposes, or to train AI. Unlike jurisdictions such as the EU, the U.S. does not regulate the collection, storage, and usage of people's biometric information on a national basis. To fill this gap, 23 states have passed or expanded their privacy laws to protect against the mass scraping of people's biometric data, including data used for facial recognition programs. States that recently passed legislation to regulate the collection of biometric data used to create facial recognition technology include Texas, Colorado, and Oregon. Most states with privacy protections related to facial recognition technology require a person to give consent, or otherwise opt in, to the collection of their biometric information. With the advent of more advanced AI technology, the use of facial recognition in apps and cell phones has increased in recent years. These AI tools can be utilized as part of a digital tracking system, and many of the recently passed or updated privacy laws are intended as a defense against the ubiquitous digital tracking in our everyday lives. Violations of these biometric privacy laws often result in lawsuits with significant settlements, including Google and Meta recently paying the state of Texas $1.4 billion in connection with violations of the Texas biometric privacy statute. If you are developing or implementing facial recognition software, it is imperative that you are familiar with all of the state biometric privacy laws to ensure compliance and avoid expensive litigation. --- Alexander L. Turner
“The study examined 950 AI medical devices authorized by the Food and Drug Administration through November 2024.”
Why this is important: Artificial intelligence is being incorporated into every avenue and process across various industries, to the extent that AI is "checking" AI. In the medical device field, artificial intelligence-enabled medical devices (AIMDs) are being developed and authorized by the Food and Drug Administration (FDA), several through the FDA's 510(k) process.
FDA 510(k) clearance is a process by which the FDA determines that a new medical device is "substantially equivalent" in safety and effectiveness to an already legally marketed device, called a predicate device. Manufacturers of certain Class I (low risk) and Class II (moderate risk) devices must submit a 510(k) premarket notification to gain this clearance before marketing their product in the United States. The 510(k) clearance pathway generally does not require manufacturers to provide clinical trial data.
Because 510(k) clearance does not require prospective human testing, many AIMDs enter the market with limited or no clinical evaluation. However, some of these devices have been associated with recalls related to diagnostic errors, measurement errors, delays, or loss of function. To be fair, non-AI-enabled medical devices also fail, but most of those failures stem from manufacturing errors in which the device does not function at all. AIMDs, by contrast, have been recalled for errors in diagnosis, perhaps not hallucinations in the generative AI sense, but incorrect results nonetheless. This leads to patients losing confidence in clinicians and clinicians losing faith in their tools.
The greatest threat is on the horizon – when AIMDs and other AI-enabled clinical tools replace or severely reduce human intervention during critical parts of the treatment process and are not corrected or verified by anything other than another AI system. Advances in technology can be beneficial, but given the unique trust being placed in AI-enabled technology, gaps should be closed as swiftly as they appear so that the solution does not become the problem. --- Sophia L. Hines
“Yet healthcare is ‘below average’ in its adoption of AI compared to other industries, according to the World Economic Forum's white paper, The Future of AI-Enabled Health: Leading the Way.”
Why this is important: With billions of people lacking access to essential healthcare and a health worker shortage in the millions, AI could help bridge the gap. However, the healthcare industry is below average in its adoption of AI compared to other industries, according to the World Economic Forum's white paper, The Future of AI-Enabled Health: Leading the Way. The article focuses on the ways in which AI is already making a difference in the medical field. First, AI software is capable of interpreting brain scans with twice the accuracy of medical professionals and can even identify the timescale within which a stroke occurred. Second, AI can identify bone fractures more accurately than medical professionals, which could reduce the need for follow-up appointments. Third, AI can provide greater objectivity than paramedics when determining which patients need to be taken to a hospital for additional care. Fourth, AI can detect the presence of more than 1,000 diseases before a patient is even symptomatic, which allows medical providers to treat conditions at an earlier stage in their development. Fifth, clinical chatbots may be able to speed up informed medical decisions by processing medical information faster and more completely. Sixth, AI can give global access to digitized medical texts, enhancing traditional, complementary and integrative medicine. Finally, AI can alleviate some of clinicians' administrative burdens, such as listening to patient visits and creating patient notes, thus freeing up more of their time for patients. While the healthcare industry is below average in its integration of AI technology, the article cautions that AI can truly transform healthcare only if time is spent creating accurate AI models and regulating AI tools. --- Jennifer A. Baker
“More than a quarter of construction teams still rely on tools like Excel and PDFs.”
Why this is important: AI data centers are in high demand. In turn, that means architecture, engineering, construction, and operations industry professionals are in high demand to get these projects off the ground. But here's the catch: the professionals tasked with building the AI infrastructure that will power the future are stuck in the past. According to this article and the underlying white paper, survey results show that about 27 percent of those professionals "still rely on email, spreadsheets and PDFs as their primary digital tools."
According to the article, digital tools like email, Excel spreadsheets, and PDFs are not robust enough to support the successful completion of large-scale, complex projects like AI data centers. That puts those projects at risk of falling behind schedule and increases the likelihood that mistakes will be made. Both scenarios put architecture, engineering, construction, and operations industry professionals in danger of costly litigation if they fail to deliver on these projects for their clients. That’s why it is important for industry professionals to consider how they can safely and securely update their internal technology to meet the intense demands of the AI data center construction boom. --- Jamie L. Martines
“Pa. trails only Texas and Virginia in announced AI buildout.”
Why this is important: Pennsylvania is rapidly becoming a leading destination for artificial intelligence and data center development, ranking just behind Texas and Virginia in announced projects. Major companies such as Microsoft and Amazon are investing heavily in the state, bringing promises of jobs and infrastructure improvements. At the same time, the expansion is creating significant challenges, particularly for the power grid and electricity consumers.
Data centers consume large amounts of electricity, and their growth has already contributed to higher costs in regional power markets. PJM, the regional grid operator serving Pennsylvania, reported a record $16 billion in a recent capacity auction, partly due to the added demand from new facilities. Consumers are feeling the impact, with utility rates rising 5 to 12 percent across much of Pennsylvania and as much as 40 percent in some areas.
Lawmakers are beginning to explore policy responses. Senator Katie Muth has introduced legislation that would require high-load users to pay for transmission upgrades instead of shifting costs to ratepayers. Other proposals focus on streamlining permits, updating zoning, and offering tax incentives. Governor Josh Shapiro’s “Lightning Plan,” which emphasizes faster permitting and clean energy investment, could also indirectly support the sector.
Progress is complicated by political gridlock. Republicans control the state Senate, while Democrats hold the House and the Governor’s office, making consensus on energy policy difficult. Environmental advocates warn that Pennsylvania has a history of prioritizing industry growth without fully considering impacts such as water use, emissions, or local siting.
In the meantime, regulatory bodies may play a larger role. The Public Utility Commission is reviewing a tariff that would require data centers to cover their own transmission costs, while PJM is considering rules that would push large users to bring additional power resources online.
The state faces both opportunity and risk: AI-related investment could drive economic growth, but without clear policies, it may also increase costs for consumers and lock in fossil fuel dependence. --- Shane P. Riley
“Agentic AI tools, like any other AI-powered tools, come with HIPAA compliance considerations to ensure that PHI is protected.”
Why this is important: AI is becoming ubiquitous across industries, including healthcare. Adopting AI in the healthcare industry can improve patient outcomes and increase profits. However, when adopting AI in connection with providing healthcare, HIPAA compliance must be a top priority. Many healthcare providers are using agentic AI, which assists with patient interaction, from checking in with patients to analyzing patients with chronic conditions to determine whether further physician review is required. However, recent studies have found that using AI as a diagnostic tool has eroded physicians' ability to make proper diagnoses. By delegating diagnosis to AI, doctors fall out of practice in making even routine medical determinations.
With the use of AI, healthcare providers are adding third-party risk to the security of their patients' protected health information (PHI). The increased risk is attributable to the exponential increase in business associates that may be involved in operating and maintaining AI systems. This is especially worrisome because a recent survey found that almost half of healthcare providers have had a data breach or cyberattack involving third-party access to their patients' PHI. The utilization of AI does not require healthcare providers to reinvent the wheel when it comes to HIPAA compliance; it just adds complexity. To navigate that added complexity, healthcare providers should create a data inventory, which lets the provider know how the data the AI uses is consumed, processed, and stored, and which vendors touch the data along the way. When contracting with an AI vendor, the healthcare provider should know how its data is being used in training the AI, whether the AI system is open or closed, and what happens to that data when the contract ends. If you would like help implementing AI tools in your healthcare practice, please contact a member of Spilman's Health Care Practice Group. --- Alexander L. Turner
This is an attorney advertisement. Your receipt and/or use of this material does not constitute or create an attorney-client relationship between you and Spilman Thomas & Battle, PLLC or any attorney associated with the firm. This e-mail publication is distributed with the understanding that the author, publisher and distributor are not rendering legal or other professional advice on specific facts or matters and, accordingly, assume no liability whatsoever in connection with its use.
Responsible Attorney: Michael J. Basile, 800-967-8251