Boundaries in the Making: Courts, Lawyers, and the Negotiation of Legitimate AI Use in Brazil

My current research examines how Brazilian courts regulate the use of generative AI by legal practitioners in the absence of specific binding rules. Conducted as part of my postdoctoral research at the University of Hamburg, with a fellowship at the Max Planck Institute for Social Anthropology within the project "Complexity as a Topic of Law," the study draws on Science and Technology Studies frameworks—particularly the concepts of boundary work, co-production, and non-human agency.
Brazil constitutes a paradigmatic case for investigating this phenomenon. The country ranks among global leaders in GenAI adoption, has the highest density of lawyers per capita in the world, and operates under a significant regulatory vacuum around the use of algorithmic tools by legal practitioners. This conjunction of high technological penetration, hypertrophy of the legal profession, and absence of specific regulation positions Brazil as a privileged site for observing how the mass adoption of generalist GenAI reshapes legal praxis and professional boundaries.
The research began with a systematic analysis of judicial decisions sanctioning AI misuse in legal filings. What emerged was a striking pattern: fabricated case law—citations to precedents that simply do not exist—constitutes the paradigmatic violation. Courts have become, in effect, fact-checkers of their own jurisprudential archives, verifying whether the rulings invoked before them are real. In responding to these cases, judges deploy varying conceptions of what AI is—a mere tool, a fallible assistant, a quasi-subject that "hallucinates"—each carrying distinct implications for how responsibility is distributed between human and machine. Across these decisions, verification emerges as the central professional duty, positioning the lawyer as an epistemic gatekeeper through whom information must pass before acquiring legal validity. Rather than prohibiting AI, courts project a future of conditional incorporation: GenAI is welcome, but only under human supervision.
Yet judicial decisions capture only part of the story. The research moves beyond the formal record to engage directly with the actors themselves—magistrates and lawyers—through in-depth interviews. How do judges actually detect AI-generated content? Is it systematic verification or intuition? How do lawyers integrate these tools into their daily work, and what informal rules guide their practice? What does it feel like to discover that a filing cites a decision you never wrote? These conversations reveal the gap between official positions and lived experience—the epistemic burdens, the evolving professional identities, the anxieties and adaptations that formal decisions leave unsaid.
The study also extends into the digital spaces where legal professionals congregate—LinkedIn groups, WhatsApp networks, online forums. Here, informal norms circulate horizontally: warnings about what not to do, tips on how to use AI "safely," shared stories of sanctions and close calls. This collective negotiation of boundaries operates alongside and sometimes in tension with institutional pronouncements, revealing how professional communities process technological disruption in real time.
By triangulating these perspectives—institutional, experiential, and collective—the research offers a multidimensional account of how law manages sociotechnical complexity when formal rules are absent. It contributes empirically by mapping boundary work in a large legal community navigating generalist AI; theoretically by applying STS concepts to understand how law processes algorithmic opacity; and practically by identifying tensions that point toward the need for thoughtful regulatory intervention.