The issue of using AI in Law is being widely debated due to the adoption of such tools by the Higher Courts to assist in processing the hundreds of thousands of appeals filed annually before the STJ and the STF.
The use of AI in Law is extremely useful, but must be approached with caution and sound understanding.
The recent news that a court-appointed defense attorney for a detainee simply "copied and pasted" more than forty precedents invented by AI, prompting justified indignation from the reporting judge, demonstrates the precautions required when analyzing AI-generated responses and verifying that they are correct. As the news report put it:
"TJ/PR rejects appeal drafted with AI that invented 43 precedents. The reporting judge stressed that it is the attorney's obligation to verify filings produced with artificial intelligence tools."
It is essential to remember that AI is merely an ALGORITHM: an extremely sophisticated body of program code that must not be confused with HUMAN INTELLIGENCE.
It is a powerful and useful tool—but it must be used with care.
I have observed the "creation" of fictional precedents, revealed not only by fabricated case numbers such as 123456 but also by the total discrepancy between the real content and the invented one.
Perhaps this reflects a programming logic oriented toward minimizing the AI's operating costs, since it avoids more advanced searches of the legal databases.
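To make that verification concrete, here is a minimal sketch of an automated cross-check a practitioner could run before filing. The lookup endpoint is hypothetical; only the CNJ unified case-number format is real, and a human check of each citation would still be indispensable.

```python
import re

import requests  # any HTTP client would do

# Hypothetical endpoint of an official court-records service (an assumption).
LOOKUP_URL = "https://court-records.example/api/cases/{number}"

# CNJ unified case-number format: NNNNNNN-DD.AAAA.J.TR.OOOO
CASE_NUMBER = re.compile(r"\b\d{7}-\d{2}\.\d{4}\.\d\.\d{2}\.\d{4}\b")

def extract_citations(draft: str) -> list[str]:
    """Collect every case number the draft cites."""
    return CASE_NUMBER.findall(draft)

def citation_exists(number: str) -> bool:
    """Treat an HTTP 200 from the registry as confirmation (assumed API semantics)."""
    response = requests.get(LOOKUP_URL.format(number=number), timeout=10)
    return response.status_code == 200

def audit_draft(draft: str) -> list[str]:
    """Return the cited case numbers that could NOT be confirmed."""
    return [n for n in extract_citations(draft) if not citation_exists(n)]

if __name__ == "__main__":
    with open("appeal_draft.txt", encoding="utf-8") as f:
        for suspect in audit_draft(f.read()):
            print(f"UNVERIFIED precedent: {suspect} - confirm manually before filing")
```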
That is why technical and legal knowledge is absolutely necessary, and we must never lose sight of the fact that the human being cannot and should not be replaced by AI, which is a powerful tool—but nothing more.
It also has limits on the length of the texts and audio it can handle and, above all, difficulty preserving earlier content when a correction or update to the text is requested.
In this regard, the analysis by Fredie Didier Jr. is very important.
Likewise, the recommendations made by Scott Erik Stafne in his article posted on Academia.edu are highly relevant.
I asked ChatGPT to analyze the issue of Artificial Intelligence in Law as presented in the document “How Artificial Intelligence could help judges and judicial officers” by Scott Erik Stafne and TODD IA.
The result was very interesting, with an expansion of data sources.
It is reproduced in full for your analysis.
What is your opinion?
Integrating AI into the Judiciary: An Evaluative Report
By ChatGPT, 30.04.2025
Introduction
The document “How Artificial Intelligence could help judges and judicial officers” by Scott Erik Stafne and TODD IA outlines a visionary proposal for deploying AI tools to enhance judicial accountability, ensure rigorous fact-finding, and constrain judicial power within constitutional limits.
It suggests using AI to monitor decisions for inconsistencies, flag procedural irregularities (such as conflicts of interest), assist legal research, and even identify the core factual and legal issues in a case to “tether” judges to proper fact-finding and law application.
This report analyzes those proposals against constitutional doctrines, international human rights standards, and real-world AI trends, assessing their strengths and weaknesses from legal, technological, and ethical perspectives.
Constitutional and International Frameworks
Any AI integration must respect judicial independence under Article III of the U.S. Constitution and fair-trial guarantees in international law. Article III vests judicial power in an independent judiciary, with judges holding office “during good Behaviour” – reflecting a lifetime tenure meant to insulate judges from external pressures.
Proposed AI oversight (“monitoring judicial behavior”, “alerting when judges attempt to act outside constraints”) must not undermine this independence. U.S. constitutional structure – separation of powers and the Supremacy Clause – demands that judges apply federal law impartially.
Internationally, the ICCPR and ECHR enshrine a two-step judicial process: an independent and impartial tribunal must first determine facts based on evidence, then apply the law.
For example, ICCPR Article 14(1) guarantees “a fair and public hearing by a competent, independent and impartial tribunal”, and Article 14(3)(e) protects the right to examine the evidence and witnesses against a party.
These rights mirror natural-law and common-law traditions that true justice depends on accurate fact-finding.
Any AI use must therefore promote – not detract from – these fundamental rights (e.g. by preserving parties’ ability to challenge evidence and ensuring an unbiased tribunal).
Proposed AI Functions and Judicial Accountability
The document proposes AI tools for monitoring consistency of rulings, flagging procedural defects, conducting legal research, and aiding oversight.
For instance, AI could “review judicial decisions” at scale to detect patterns of bias or ignored issues, and flag conflicts of interest in real time.
It could also help litigants by retrieving relevant precedents, and it could alert conduct commissions to deviations from legal norms.
These measures aim to enhance judicial accountability, defined by legal scholars as “explaining decisions” and demonstrating they are independent, impartial, and evidence-based.
Reasoned judgments that clearly address each issue – traditionally a check on bias – would be reinforced by AI oversight.
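As an illustration only (the data model below is an assumption, not something the document specifies), the real-time conflict-of-interest check mentioned above can be pictured as a cross-reference between a judge's normalized disclosures and the parties to a newly assigned case:

```python
from dataclasses import dataclass, field

@dataclass
class Judge:
    name: str
    # Disclosed financial interests and relationships, normalized to canonical names.
    disclosed_interests: set[str] = field(default_factory=set)

@dataclass
class Case:
    docket: str
    parties: set[str]

def conflict_flags(judge: Judge, case: Case) -> set[str]:
    """Names appearing both among the parties and in the judge's disclosures."""
    return judge.disclosed_interests & case.parties

# Usage: run at assignment time, before the judge acts on the case.
judge = Judge("J. Doe", disclosed_interests={"Acme Corp", "XYZ Holdings"})
case = Case("23-1234", parties={"Acme Corp", "State"})
if overlap := conflict_flags(judge, case):
    print(f"Potential conflict in {case.docket}: {', '.join(sorted(overlap))}")
```

A real system would need entity resolution (subsidiaries, spelling variants, indirect holdings), which is exactly where the harder problems live.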
Strengths:
AI could significantly improve transparency and consistency. By systematically detecting anomalies or unexplained shifts in reasoning across decisions, AI could highlight potential mistakes or bias that might otherwise go unnoticed.
This aligns with the rule-of-law principle of open justice: making more judicial actions visible can strengthen public confidence.
Moreover, AI-assisted legal research can level the playing field for under-resourced litigants, enhancing equality before the law as required by Article 14 of the ICCPR.
Weaknesses/Challenges:
However, intensifying oversight raises independence concerns.
Article III’s lifetime tenure exists to protect judges from external control. If AI acts as a “constant watchdog” forcing judges to explain every departure from its analysis, critics may argue this infringes on judicial discretion.
Additionally, AI systems can inherit biases: training on historical rulings might reinforce existing inequalities (recall that algorithms have shown racial biases in criminal risk assessments).
AI accuracy is not perfect; errors or “hallucinations” by large language models could mischaracterize a judge’s decision.
Legally, any automated review must preserve a judge’s ability to make final determinations without undue coercion.
The Victorian Law Reform Commission warns that AI must not “undermine judicial independence and impartiality” or the right to procedural fairness.
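On the hallucination point in particular, one cheap safeguard (a sketch only, assuming the decision text is available as a plain string) is to confirm that every passage an AI summary attributes to a decision actually occurs in it:

```python
import re

def _normalize(text: str) -> str:
    """Collapse whitespace so line-wrapping differences do not cause false alarms."""
    return re.sub(r"\s+", " ", text).strip().lower()

def unsupported_quotes(ai_summary: str, decision_text: str) -> list[str]:
    """Quoted passages in the AI summary that never appear in the decision."""
    decision = _normalize(decision_text)
    quotes = re.findall(r'"([^"]+)"', ai_summary)
    return [q for q in quotes if _normalize(q) not in decision]

# Usage: any hit means the "quotation" was fabricated or garbled.
summary = 'The court held that "the seizure was plainly unreasonable".'
decision = "On these facts the seizure was not unreasonable."
for q in unsupported_quotes(summary, decision):
    print(f'Quote not found in the decision: "{q}"')
```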
Enforcing the Two-Step Process (Fact-Finding and Law Application)
The document stresses the two-step judicial process: first establish facts, then apply law.
It envisions AI reading parties’ filings, extracting factual disputes and legal questions, and presenting them clearly to the judge.
If a judge diverges from the AI-identified issues, the proposal would require a written explanation of why.
This aims to “cabin” judicial power by ensuring factual issues raised by the parties are actually addressed.
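A minimal sketch of that tethering mechanic follows, assuming the issue list has already been extracted from the filings; the naive substring match stands in for whatever NLP matching a production system would actually need:

```python
def unaddressed_issues(ai_issues: list[str], draft_opinion: str) -> list[str]:
    """Issues extracted from the filings that the draft opinion never mentions."""
    text = draft_opinion.lower()
    return [issue for issue in ai_issues if issue.lower() not in text]

# Usage: each gap prompts the judge either to address the issue or to explain why not.
issues = [
    "chain of custody of the seized drive",
    "standing of the second plaintiff",
]
draft = "The court finds the chain of custody of the seized drive intact..."
for gap in unaddressed_issues(issues, draft):
    print(f"Not addressed in the draft opinion: {gap} (explanation required)")
```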
Strengths:
This could promote meticulous fact-finding. Ensuring judges confront each significant factual and legal issue could reduce arbitrary rulings and improve consistency with precedent.
It also bolsters the transparency of judicial reasoning: requiring explicit explanations for deviations encourages accountability and thorough written opinions.
By focusing judges’ attention on parties’ core claims, AI might help prevent sidestepping of contested facts, supporting litigants’ fair hearing rights under ICCPR Article 14.
Weaknesses/Challenges:
Forcing a judge to justify every divergence from an AI summary may be problematic.
Judges are entrusted by Article III to exercise independent judgment; if AI influences or pressures their reasoning, this could blur lines between assistance and interference.
Critically, requiring reasons for departures could be seen as second-guessing by an automated process. In adversarial systems, fact-finding is also the function of parties presenting evidence; some might argue it is inappropriate for AI to recast this role.
Technologically, natural language processing tools may not reliably parse complex legal arguments; incomplete or incorrect issue-spotting could mislead judges.
Ethically, the approach risks reducing judges to intermediaries following an AI roadmap, which might undermine the dignity and accountability inherent in judicial office.
For example, the Dutch Bar Association criticized a court’s use of generative AI for calculating damages, noting it violated the defendant’s right to comment on evidence.
Any AI-guided fact-finding must similarly ensure parties can scrutinize and challenge the AI’s assumptions, lest it sidestep the adversarial process.
Transparency, Public Scrutiny, and Open Justice
The proposal highlights that AI could publish more judicial data and make decisions more accessible for public scrutiny.
Greater openness aligns with the open courts doctrine: public hearings and published reasons are seen as vital to legitimacy. If AI platforms allow citizens or oversight bodies to track judicial performance (e.g. which cases are decided and on what basis), this could reinforce trust and accountability.
However, transparency must be balanced with fairness. AI-driven disclosure could run afoul of privacy or confidentiality norms if not carefully managed. For example, sensitive case details might be inadvertently exposed if AI tools handle sealed information. Moreover, simply publishing “more data” is not sufficient; courts must ensure that any AI analysis itself is explainable.
The Victorian principles emphasize that AI used in courts should be explainable so outcomes can be challenged. Without clear understanding of the AI’s methods, increased transparency might not translate to public confidence.
Legal Doctrines and Human Rights Alignment
The proposal’s rhetoric invokes Article III’s “good behavior” clause and international law to justify fact-centric justice.
Indeed, Article III secures judicial tenure and independence, not judicial omniscience.
Strengthening the discipline of fact-finding does align with due process: for instance, ICCPR Article 14(3)(e) guarantees the right “to examine, or have examined, the witnesses against him and to obtain the attendance and examination of witnesses on his behalf”.
Both national and international law demand that parties have a fair opportunity to present and challenge evidence. AI’s role should therefore be to enhance – not circumvent – these rights.
One can cite international jurisprudence: in the Netherlands, a court’s use of an AI tool to estimate damages was criticized because the defendant could not question the AI’s sources, violating the fair-trial guarantee.
This underscores that AI must not hide “evidence” behind closed algorithms.
On the other hand, well-designed AI could help enforce human-rights norms by flagging potential violations of those norms (e.g. if a judicial decision appears inconsistent with precedent or a statutory requirement).
The UNESCO Principles on AI and Justice (currently under development) emphasize that AI in courts must uphold human rights and the rule of law.
Any proposal must thus ensure AI tools respect the right of all litigants to “be heard” by an impartial judge and to obtain reasoned decisions consistent with law and facts.
Technological Considerations and Current Trends
Recent experience shows judges beginning to experiment with AI.
For example, Judge Kevin Newsom of the U.S. Eleventh Circuit openly reported consulting ChatGPT to interpret a sentencing guideline term, ultimately deeming such AI a potentially “valuable” aid. In Ross v. United States (D.C. 2025), the court’s opinions themselves quoted ChatGPT content, signaling a willingness to test AI while warning that it should “aid, not replace, judicial reasoning”.
These developments indicate an emerging trend: AI as a research or drafting tool is being cautiously integrated.
Yet technology has limits. Current NLP systems can summarize and retrieve legal texts with impressive breadth, but they often lack deep understanding of legal nuance. Errors, out-of-date data, or bias in training sets can mislead judges or litigants.
Security is also a concern: inputting privileged case information into cloud-based AI raises confidentiality risks (as noted by the Ross concurrence, which highlights privacy and data-security issues).
Robust in-house AI solutions could mitigate some risks, but such systems require substantial resources. Moreover, reliance on AI could entrench existing systemic biases unless explicitly corrected.
The Victorian Law Reform Commission advises that AI systems be tested, monitored and subject to human oversight at every stage.
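One way to operationalize "human oversight at every stage" (a design sketch only, not an architecture the Commission prescribes) is to make every AI output inert until a named human reviewer signs off on it:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AiSuggestion:
    case_id: str
    content: str                        # e.g. a retrieved precedent or a flagged anomaly
    approved_by: str | None = None
    approved_at: datetime | None = None

    def approve(self, reviewer: str) -> None:
        """Record the human sign-off that makes the suggestion actionable."""
        self.approved_by = reviewer
        self.approved_at = datetime.now(timezone.utc)

    @property
    def actionable(self) -> bool:
        return self.approved_by is not None

# Usage: downstream systems consult `actionable` before surfacing anything.
s = AiSuggestion("23-1234", "Party overlaps with the judge's disclosed interests")
assert not s.actionable    # AI output alone has no effect
s.approve("Clerk R. Silva")
assert s.actionable        # only a signed-off suggestion proceeds
```

The sign-off doubles as an audit trail, which is what makes later review of the AI's influence possible.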
Ethical and Policy Challenges
Ethically, using AI to scrutinize judges poses novel questions.
On one hand, it could curb corruption or laxity by revealing deviations from legal norms. On the other, it could transform judges into overly compliant technocrats. Judges have a duty to decide “without fear or favor”; if AI oversight is perceived as a surveillance tool, it might chill judicial initiative or subtly influence outcomes.
The principle of separation of powers cautions against giving any one branch (or external actor) undue influence over another.
Accountability for AI decisions is another issue. If an AI analysis is wrong and the judge follows it, who is responsible?
Conversely, if a judge ignores the AI and errs, will that judge be unfairly blamed?
These scenarios complicate the legal doctrine of judicial accountability. Moreover, transparency demands (Article III's good-behavior clause implies impeachment as the recourse) mean that any AI system used for discipline would need clear standards and legal authority.
Finally, international human-rights norms stress that remedies should be available for rights violations.
If AI flags a judge’s conduct, how will a litigant invoke that?
There is no obvious “AI impeachment” process.
One must consider how AI-derived evidence would fit into existing judicial review or appeals.
Any push to integrate AI must be accompanied by careful policy design: guidelines (such as those UNESCO and national bodies are drafting) to ensure fairness, and training for judges to use AI ethically.
Conclusion
The proposal to harness AI for judicial accountability and discipline of fact-finding is innovative and could yield significant benefits in consistency, transparency, and access to legal information.
It aligns with global trends and even echoes some judicial experiments with AI.
However, it also raises profound concerns. Legally, any AI oversight must honor Article III independence and fair-trial rights; ethically, it must avoid turning judges into de facto algorithmic overseers or compromising the human element of judgment.
Strengths of the proposal lie in its potential to catch biases and enhance open justice, promoting the rule of law as envisaged by constitutional and international norms.
Weaknesses include technological limitations, risks of bias amplification, and institutional resistance (judges rightly guard their independence).
Current best practices suggest a cautious path: use AI as a supplementary tool for research and analytics (as in the Newsom and Ross examples), ensure human oversight, and adhere to emerging guidelines requiring that AI systems be fair, transparent, and accountable.
In sum, AI can support but not replace the human duties of judges. To align with constitutional Article III and ICCPR Article 14 requirements, any integration must reinforce judges’ role in making final, reasoned decisions based on evidence, while using AI to illuminate (not dictate) the path toward justice.
Sources:
The proposals are drawn from How Artificial Intelligence could help judges and judicial officers.
Legal and ethical analysis is supported by constitutional doctrine, ICCPR fair trial guarantees, scholarly commentary on judicial independence, real-world cases and reports (e.g. Ross v. United States, Reuters report on Judge Newsom, Dutch AI controversy) and guidelines (Victoria Law Reform Commission, UNESCO draft guidelines).