Welcome
Welcome to the second 2026 issue of Decoded -- our technology law insights e-newsletter.
As we continue into the new year, we also want to remind you of a highly valuable area of the law – workplace investigations.
Spilman lawyers are skilled in conducting independent, confidential internal investigations involving highly sensitive workplace matters. Our attorneys advise employers across a broad range of industries, including the technology sector. If you have any questions about workplace investigations, please reach out to us. You may also click here to learn more about our practice.
2026 Smart Business Dealmakers Conference, February 19, Pittsburgh, Pennsylvania
What are CEOs saying about today’s climate for dealmaking? Join Spilman on February 19th at the 2026 Smart Business Dealmakers Conference to hear firsthand insights from Pittsburgh’s top business leaders. Dealmakers gathers hundreds of local CEOs, investors, lenders, and service providers so you can stay on top of M&A trends. Don’t miss this high-level conversation on the front lines of buying, selling, and scaling.
Register with promo code SPILMAN250 for $250 off. Click here for tickets.
As always, thank you for reading.
Nicholas P. Mooney II, Co-Editor of Decoded; Chair of Spilman's Technology Practice Group; Co-Chair of the Cybersecurity & Data Protection Practice Group; and Co-Chair of the Artificial Intelligence Law Practice Group
and
Alexander L. Turner, Co-Editor of Decoded and Co-Chair of the Cybersecurity & Data Protection Practice Group
“A handful of states are considering or enacting legislation to address the issue, and many courts and professional associations are focused on education for attorneys.”
Why this is important: AI has taken the legal community by storm, and the legal system is trying to play catch-up. AI is being used in legal research and writing, to automate administrative tasks, to analyze contracts, and to organize documents, among many other applications. But AI is far from perfect, so state bar associations, state and federal court systems, and national law organizations are scrambling to regulate its use in the legal field and provide necessary education for attorneys.
One of the most concerning issues with generative AI in the legal field is its propensity to create fake case law, known as hallucinations, which attorneys have relied on in their legal filings to their detriment. Harsh consequences and sanctions can result once the court learns of the hallucinations, because the attorney has certified the veracity of the information contained in the filing by signing it. A wide range of sanctions is available to the court should an attorney submit false quotes, fake cases, or incorrect information without verifying its accuracy.
With the increased availability and use of AI, it is critical that attorneys be aware of their own competency in using AI tools in order to properly evaluate the associated risks. As of the publication of this article, the state bar associations of only 10 states and the District of Columbia have issued formal guidance on the use of AI, typically through ethics opinions. Courts in at least 11 states, Virginia included, have established policies or issued rules of conduct regarding AI use by legal professionals. Regardless of the measure, the onus will be on the attorney to verify the authenticity of evidence and the veracity of the case law and quotations produced with generative AI tools. Accountability will also come from opposing counsel, who are permitted, perhaps even obligated, to notify the court of authenticity and admissibility issues if they suspect evidence or court filings were generated or modified by AI. Some states are looking to pass legislation on these issues and to address confidentiality concerns in using public generative AI systems. AI is only going to become more pervasive as more tools are developed, so it is critical that the practicing attorney keep up not only with the technology but also with the associated ethical obligations and professional regulations. --- Jennifer A. Baker
“New restrictions could add Pennsylvania to a growing list of states seeking to impose guardrails as more kids use AI chatbots like ChatGPT, Meta AI and Gemini.”
Why this is important: Gov. Shapiro is directing Pennsylvania agencies to draft stricter regulations for AI chatbots, citing concerns that they can mislead and emotionally harm children. Shapiro argues that safeguards are urgently needed, especially as teens increasingly turn to AI tools for social interaction and emotional support. His proposals include age verification, parental consent requirements, bans on sexually explicit or violent content involving minors, mandatory crisis-resource referrals when self-harm is mentioned, and periodic reminders that users are interacting with AI, not humans. The push follows high-profile lawsuits naming Google in connection with the Character.AI platform and alleged harms to young users.
Experts caution that enforcement may be technically and legally complex. Age verification systems are often ineffective and raise privacy concerns, and AI guardrails can be circumvented through creative prompting. While bipartisan legislation in Pennsylvania would establish age-appropriate standards and require safeguards against content promoting self-harm or violence, questions remain about penalties and oversight.
The effort unfolds amid broader national tension over AI regulation. A recent executive order indicated resistance to state-by-state AI laws in favor of a more lenient federal framework. As states like California and New York advance their own AI rules, experts warn of a patchwork system but note that large states often set de facto national standards that national companies ultimately follow. --- Shane P. Riley
“The company says it has been preparing for a post-quantum world since 2016, rolling out quantum-resistant protections across its infrastructure while aligning its migration plans with NIST standards finalized in 2024.”
Why this is important: Google is warning that the same machines expected to drive advances in drug discovery, materials science, and energy could also crack the encryption securing our confidential information. And although such machines do not yet exist, bad actors are unlikely to wait until they do -- they are expected to carry out “store now, decrypt later” attacks, harvesting encrypted data today in anticipation of being able to decrypt it in the future.
Kent Walker, President of Global Affairs at Google and Alphabet, and Hartmut Neven, founder and lead of Google Quantum AI, are calling on governments and industries to act. They recommend that policymakers drive society-wide momentum through cloud-first modernization, global alignment on standards, and closer engagement with quantum experts to avoid security surprises. Their post stresses that the response should be preparation, not panic, as they believe quantum computing can help shape a better future for everyone. However, an “all hands on deck” approach needs to be adopted now to ensure greater security in the future. --- Suzanne Y. Pierce
“Malwarebytes urged companies to adopt continuous monitoring and lock down identity systems as AI models get better at orchestrating intrusions.”
Why this is important: The 2025 threat landscape marked a turning point: attackers are now using AI not just to craft better phishing emails, but to autonomously orchestrate multi-stage intrusions with minimal human intervention. Malwarebytes' analysis shows that AI-driven tools can now chain together reconnaissance, exploitation, and lateral movement faster than traditional detection systems can respond.
The practical concern for organizations is that attack timelines have compressed dramatically. What used to take skilled operators days or weeks can now happen in hours. This acceleration means that reactive security postures are no longer viable. Organizations need continuous monitoring that can identify anomalous behavior in real time, not just signature-based threats.
The priority response is twofold: first, strengthen identity and access management controls, particularly privileged access, since AI-driven attacks excel at credential abuse and privilege escalation. Second, implement behavioral analytics that can detect unusual patterns even when individual actions appear legitimate. For law firms and professional services organizations handling sensitive client data, this shift requires treating identity infrastructure with the same rigor historically reserved for perimeter defenses. The assumption that human attackers need time to think and plan no longer holds. --- James E. Dunlap, Chief Information Officer, Spilman
“Leaders must implement clear usage policies, deploy enterprise-grade solutions with data controls and foster ongoing security awareness to prevent costly data breaches.”
Why this is important: Most of us are comfortable working with traditional software, which lets us share whatever data we want with it while the results remain private. AI systems, in sharp contrast, tend to absorb every piece of data we share with them. This creates a serious problem: the data absorbed by your AI tool can be accessed by outsiders, especially if you are using a publicly accessible AI platform. Nor can the data be deleted, as it becomes part of the system, unable to be separated out.
With this in mind, it is critical that companies put policies in place to make employees aware of the risks of AI tools and to ensure their correct use. Companies should create a crystal-clear acceptable use policy for AI, explaining what can and cannot be shared under any circumstances; deploy enterprise-grade AI solutions with data controls to limit exposure; and implement technical safeguards and regular monitoring to ensure employee compliance and prevent inadvertent leaks. The use of AI tools by companies is now inevitable, but they must strike a balance between innovation and data security. --- Suzanne Y. Pierce
“While the bill’s prospects are uncertain, Wired reports that New York is at least the sixth state to consider pausing construction of new data centers.”
Why this is important: As AI continues its rapid expansion, technology companies are spending billions to build new data centers. New York has joined a growing cohort of states—including Virginia, Vermont, Maryland, Georgia, and Oklahoma—that are reconsidering their policies on massive data centers. The proposed legislation, S9144, would impose a three-year moratorium on the issuance of new permits for data centers.
The proposals highlight a critical tension: while the tech industry views data centers as the essential engines of the modern economy, state regulators increasingly view them as a strain on public resources. Bipartisan concerns about data centers focus on environmental impacts, the state of the electrical grid, and rising energy costs for consumers. The proposed regulation and similar legislative trends signal a shift from a permissive development environment to one more defined by utility regulation and environmental scrutiny.
New York’s Energize NY Development initiative suggests that future approvals may be contingent on "pay-to-play" infrastructure upgrades, where developers must fund grid modernization to gain access. As more states face energy concerns, the legal framework used in New York could become a blueprint for other jurisdictions, fundamentally altering how and where digital infrastructure is built. --- Alison M. Sacriponte
“As AI spreads across construction, contractors need simple tools that scale expertise, reduce risk and deliver real jobsite results without adding complexity.”
Why this is important: In construction’s emerging AI era, the technologies that will ultimately win are not the most sophisticated, but the simplest to deploy and use effectively. Rather than prioritizing complex model architecture or flashy innovation, successful AI tools will be those that reduce friction on jobsites, scale the expertise of seasoned professionals, and measurably improve productivity while minimizing rework. Construction professionals should therefore be cautious about accumulating “AI debt” – the operational and financial burden created by poorly integrated or overly complicated systems – particularly in an industry defined by tight margins and high execution risk.

For construction lawyers and industry advisors, the takeaway is clear: technology adoption will increasingly influence contract drafting, risk allocation, data governance, and dispute exposure. Understanding which tools genuinely enhance performance, as well as those that introduce new layers of cost, ambiguity, or liability, will therefore be critical when it comes to procurement, compliance, and the allocation of responsibility in construction agreements. --- Jonathan A. Deasy
“In a recent poll, 42% of Pennsylvanians said they do not want one built in or near their community.”
Why this is important: Pennsylvanians are voicing their concerns about the rapid increase in data center construction throughout the commonwealth—and state lawmakers are listening, though it is unclear whether those lawmakers will be taking any concrete steps in the immediate future to slow down the number of new projects sited in the state. Residents are worried about the strain on local resources, including water and electricity, as well as secondary effects like pollution. State lawmakers on both sides of the aisle share these concerns and have proposed legislation that would enact stricter oversight at the community level and give more control to the municipalities hosting data centers. Other bills propose regulating energy costs, enacting stricter permitting guidelines, or requiring more thorough emergency planning. Some place a moratorium on building altogether. Governor Josh Shapiro, on the other hand, has been a vocal supporter of data center construction in Pennsylvania and believes it is critical to the state’s economic development. For now, while Pennsylvania is a state to watch in terms of stricter data center regulation, major changes to the regulatory landscape do not seem likely to happen overnight. --- Jamie L. Martines
“Nearly a dozen parents from around the state urged lawmakers to support a ‘bell-to-bell’ cellphone prohibition.”
Why this is important: Parents are encouraging their Pennsylvania lawmakers to support a bell-to-bell cellphone prohibition in schools and other phone-free policies statewide. Those who support the prohibition say it will decrease cyberbullying incidents and give students the opportunity to practice social skills and independence. Supporters also argue that fewer phones will allow first responders to be contacted quickly and efficiently in emergency situations. Detractors of the ban point to the peace of mind that comes with kids having access to their phones in those same emergencies. Along with the cellphone prohibition, lawmakers also discussed investing in silent alarm systems in schools to notify first responders faster in emergency circumstances, as many states and school districts have already done.
If schools adopt these measures, they may reduce the risk of lawsuits in the event of an emergency, since documented safety measures would offer protection from a liability perspective. However, if the safety measures are not adopted and an incident occurs, schools and school systems could face legal scrutiny for failing to implement recognized preventive technology. The article shows a shift in what is considered “reasonable” school safety as schools implement more protective measures, which will influence future litigation and liability. There are also privacy and parental rights concerns with banning cellphone use during the school day, and schools would have to adopt new policies or update their existing policies to comply with a broader statewide requirement.
The decisions made now will not only influence student mental health and classroom culture but also set precedent that could affect future liability, parental rights debates, and the balance between safety and personal privacy. In this way, the article emphasizes the growing legal significance of proactive school safety measures and the complex considerations that accompany them. --- Nicholas A. Muto
This is an attorney advertisement. Your receipt and/or use of this material does not constitute or create an attorney-client relationship between you and Spilman Thomas & Battle, PLLC or any attorney associated with the firm. This e-mail publication is distributed with the understanding that the author, publisher and distributor are not rendering legal or other professional advice on specific facts or matters and, accordingly, assume no liability whatsoever in connection with its use.
Responsible Attorney: Michael J. Basile, 800-967-8251