Seyfarth Synopsis: Pro se plaintiffs are filing more ADA Title III and FHA complaints using AI tools that enable harassing litigation tactics.
One of the trends we did not predict at the beginning of this year was how AI tools such as Copilot, Gemini, and ChatGPT would change the landscape of lawsuits and claims brought under Title III of the Americans with Disabilities Act (ADA) and the Fair Housing Act (FHA). After seeing our clients hit by an unusually high number of pro se complaints and lawsuits that appear to involve the use of AI tools, we decided to take a look at the numbers.
Turns out, there have been 40% more federal pro se ADA Title III lawsuits filed in 2025 than in 2024, based on a comparison of average monthly numbers. Federal pro se FHA lawsuits jumped by a whopping 69% during the same period. According to our Lex Machina search, pro se plaintiffs filed 1,774 federal lawsuits alleging ADA Title III violations in all of 2024, compared to 1,867 in the first nine months of 2025. Pro se plaintiffs filed 421 federal lawsuits alleging FHA violations in all of 2024, compared to 531 in the first nine months of 2025. These numbers do not include complaints filed in state court or before administrative agencies, where most fair housing grievances are brought.
Most pro se litigants we encounter are using AI tools to help them litigate. The tell-tale signs of such use include the citation of non-existent cases (with parentheticals, no less), descriptions of case holdings that are completely wrong, substantive briefs "written" in less time than it would take anyone to type the document, and work product that does not match the plaintiff's spoken English skills.
NBC recently reported that many litigants are using ChatGPT to bring lawsuits instead of hiring counsel. And while some may say this is a positive development for the private enforcement of the ADA and FHA, there are also adverse consequences. Unconstrained by rules of professional ethics or the fear of being disbarred, pro se litigants have been known to file briefs citing fake cases and to bombard defendants with frivolous accusations, demands, and motions. We've seen pro se plaintiffs generate briefs opposing routine extension and pro hac vice motions that opposing counsel would rarely contest. These actions drive up defense costs substantially and create more work for the judges who must intervene to stop the bad behavior.
Some courts have taken action to sanction pro se litigants who have used AI tools improperly, and have dismissed some cases outright for the misuse of such tools. U.S. District Judge Christopher Boyko of the Northern District of Ohio has a standing order banning the use of AI in the preparation of any document filed with the court. We predict more judges will address the abusive use of AI in the future.
What are companies targeted by aggressive pro se plaintiffs to do? While it may be tempting to simply pay to make these pro se plaintiffs go away, capitulation will only reward and encourage more bad behavior. Mounting a vigorous defense where the claims lack merit, including seeking sanctions when a pro se plaintiff uses AI tools to mislead the court or harass defendants, may be a better option.