Ethical Rules for Using Generative AI in Your Practice

By Stephen J. Herman, Esq.  (Updated June 2, 2024)

At the risk of stating the obvious, we are still in the early days of what we believe to be an “AI Revolution” in the way that goods and services, including legal services, are and will be provided, which means that we do not, at this point, have much in the way of formal guidance.[1]  The best we can do is identify potential issues that could be seized upon by our clients, Disciplinary Counsel and/or the Courts as arguable violations of the ethical and professional standards and rules, while recognizing that, as these services continue to develop very rapidly, a technological advance (or, perhaps, a change in a provider’s Terms of Use or Privacy Policy) could, in a short period of time, either obviate or further complicate some of these potential issues and concerns.

With those caveats, the principal questions that have been raised in terms of an attorney’s use of (and, potentially, failure to use) services like ChatGPT and other Generative AI technologies have generally fallen into two broad categories: (a) maintaining a general competence in understanding the risks and benefits of the technology, and ensuring that the ultimate work product is reliable and consistent with an acceptable legal, ethical and professional standard of care; and (b) ensuring that attorney-client privileged and other legally protected information remains confidential and secure.  With that preface, this paper will examine some of the Professional Rules[2] and other legal requirements that could potentially be implicated by a law firm’s use (or non-use) of ChatGPT or other Generative AI.


Competence

Rule 1.1 of the ABA Model Rules of Professional Conduct requires general competence in the representation of a client.  Official Comment [8] to the Rule advises that “a lawyer should keep abreast of changes in the law and its practice, including a reasonable understanding of the benefits and risks associated with relevant technology the lawyer uses to provide services to clients or to store or transmit information related to the representation of a client.” [3]

While most of the focus has centered on the responsibility to understand and account for limitations in the use of ChatGPT and other similar services, some have suggested that the Rule also implies an affirmative duty to use appropriate AI technologies where the benefits outweigh the risks, in terms of cost-savings to the client, and perhaps even quality.

With respect to the risks, many have focused on what are sometimes referred to as “hallucinations” – i.e., responses to prompts that, while bearing all the objective signs of reliability, are factually inaccurate.  As helpfully explained by the Washington DC Bar:

Lawyers should understand that GAI products are not search engines that accurately report hits on existing data in a constantly updated database. The information available to a GAI product is confined to the dataset on which the GAI has been trained. That dataset may be incomplete as to the relevant topic, out of date, or biased in some way. More fundamentally, GAI is not programed to accurately report the content of existing information in its dataset. Instead, GAI is attempting to create new content. In the case of a request for something in writing, GAI uses a statistical process to predict what the next word in the sentence should be. That is what the “generative” in GAI means: the GAI generates something new that has the properties its dataset tells it the user is expecting to see.[4]

In one highly publicized case, for example, a law firm was sanctioned when the lawyers “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT.” [5]  Notably, the lawyer in that case specifically asked ChatGPT whether the cases it had cited were real or fake, and ChatGPT replied that it had supplied “real” decisions that could be found through Westlaw, LexisNexis and the Federal Reporter.[6]

One oft-quoted authority in this area is David Curle, Director of the Technology and Innovation Platform at Thomson Reuters, who advises that:

If lawyers are using tools that might suggest answers to legal questions, they need to understand the capabilities and limitations of the tools, and they must consider the risks and benefits of those answers in the context of the specific case they are working on.[7]

Some have also pointed to Official Comment [5] to the Rule, and suggested that over-reliance on an AI tool for legal research and analysis may violate the professional duty of “inquiry into and analysis of the factual and legal elements of the problem.”[8]

Candor to the Court

Related to the general responsibility to understand and account for any limitations in the technology is the responsibility of candor to the court.  Rule 3.3, in this regard, prohibits lawyers from knowingly:

– making a false statement of fact or law to a tribunal, or failing to correct a false statement of material fact or law previously made to the tribunal by the lawyer;  and/or,

– failing to disclose to the tribunal legal authority in the controlling jurisdiction known to the lawyer to be directly adverse to the position of the client and not disclosed by opposing counsel.[9]

While grounded in Rule 11,[10] the sanction of the lawyers in Mata, supra, was premised largely on the law firm’s refusal to correct the record after the lawyers became aware that the citations provided by ChatGPT did not exist.[11]

It has also been noted, with respect to sub-section (a)(2) of the Rule, that a lawyer may not be able to know whether there is adverse legal authority in the jurisdiction if he or she relies too much on the way that an AI service responds to a particular prompt, especially where the prompt is only seeking support for the client’s position.

Supervision of Associates and Non-Lawyer Assistance

Rules 5.1 and 5.3 place an affirmative duty on a supervising attorney to undertake reasonable efforts to ensure that associates, paralegals and other staff working under their direction conform to the ethical and professional obligations of the attorney.

In this regard, it is likely a good idea to establish, periodically review, and enforce internal policies and protocols regarding the use – and/or limitations and restrictions on use – of ChatGPT and other AI products by lawyers and other employees of the firm.  (As well as local counsel or other co-counsel, where appropriate.[12])

At the same time, Rule 5.3 may additionally be interpreted to impose a duty with respect to information generated by the AI product or service itself.[13]


Confidentiality

Perhaps the most serious concerns that have been raised regarding the use of ChatGPT and other AI systems surround the security of privileged and other legally protected information.  Under Rule 1.6, an attorney is not only generally prevented from disclosing “information relating to the representation of a client,” but is also charged with an affirmative duty to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”[14]

Using ChatGPT to analyze a client’s legal documents that contain privileged or other confidential information can pose a risk that such information could be misused or exposed.[15]  In March of 2023, there was, for example, a data leak at ChatGPT that allowed its users to view the chat history titles of other users.[16] In addition to such potential data breaches, chat history can be accessed and reviewed by ChatGPT or other Generative AI company employees, and may also be provided to third-party vendors and affiliates.[17]

In addition to attorney-client privileged information and/or work product, one also has to be cognizant of other legal protections and requirements which might be applicable to the clients’ information, including:

– HIPAA (Health Insurance Portability and Accountability Act of 1996) [18]

– The European Union’s General Data Protection Regulation (GDPR) [19]

– The California Consumer Privacy Act (CCPA) [20] (and/or other State Privacy Laws)

– Trade Secret Protection (which may be compromised by “disclosure” to the AI service) [21]

– Contractual Non-Disclosure Agreements and Obligations

The Florida Bar Ethics Opinion regarding the use of Generative AI advises that: “Existing ethics opinions relating to cloud computing, electronic storage disposal, remote paralegal services, and metadata have addressed the duties of confidentiality and competence to prior technological innovations and are particularly instructive” and generally concludes that a lawyer should:

Ensure that the provider has an obligation to preserve the confidentiality and security of information, that the obligation is enforceable, and that the provider will notify the lawyer in the event of a breach or service of process requiring the production of client information;

Investigate the provider’s reputation, security measures, and policies, including any limitations on the provider’s liability; and

Determine whether the provider retains information submitted by the lawyer before and after the discontinuation of services or asserts proprietary rights to the information. [22]

In the Terms of Use dated March 14, 2023, OpenAI advised that:

If you use the Services to process personal data, you must provide legally adequate privacy notices and obtain necessary consents for the processing of such data, and you represent to us that you are processing such data in accordance with applicable law. If you will be using the OpenAI API for the processing of “personal data” as defined in the GDPR or “Personal Information” as defined in CCPA, please fill out this form to request to execute our Data Processing Addendum. [23]

The updated Terms of Use, promulgated in November of 2023 and effective as of January 31, 2024, simply state that:

You are responsible for Content, including ensuring that it does not violate any applicable law or these Terms. You represent and warrant that you have all rights, licenses, and permissions needed to provide Input to our Services. [24]

Claude’s Acceptable Use Policy similarly prohibits users from “violating any natural person’s rights, including privacy rights as defined in applicable privacy law” as well as “inappropriately using confidential or personal information.”[25]

Pierce and Goutos, from Gunderson Dettmer, explain and opine that:

Rigorous guardrails must be carefully established to ensure the responsible use of GAI systems. These challenges can be, and are actively being addressed through methods such as employee training, AI governance policies, and the formation of specialized AI task forces.  Legal entities ranging from academic institutions, like MIT, to law firms are actively helping to shape AI governance in the legal profession.  Most recently, the ABA initiated a Task Force on Law and Artificial Intelligence to investigate AI-related biases, potential threats to client confidentiality, and risks concerning privilege waivers. This new task force will also explore AI’s role in expanding access to justice and developing resources for legal professionals.

As these collective initiatives continue to solidify the developing framework for lawyers’ responsible use of AI, it is important to recognize some of the existing countermeasures that can help mitigate risks associated with concerns such as the unauthorized sharing of confidential information.

For instance, OpenAI’s policy modifications in April 2023, which permits the disabling of chat history in ChatGPT, acts as a tangible safeguard against unintended data use.  Further enhancing users’ data control, OpenAI’s August 2023 update introduced an enterprise-focused model which offers enhanced security protocols, sophisticated data analysis, and bespoke customization capabilities.  In parallel, a number of third-party vendors are developing solutions that allow legal professionals to have secure access to enterprise-level instances of OpenAI’s models, while still protecting clients’ confidential information. These and future advancements will aid in safeguarding privileged attorney-client communications and confidential data. The technology and developments in this space continue to evolve rapidly. Based on the current trajectory, we anticipate that a majority of law firms and organizations will adopt custom experiences powered directly into their own applications, as well as prohibit the input of any confidential information into public GAI tools, which will substantially alleviate breach of confidentiality concerns. [26]

A lawyer’s affirmative duty to reasonably communicate with his or her client is also implicated in this context.  In particular, Rule 1.4  requires an attorney to “reasonably consult with the client about the means by which the client’s objectives are to be accomplished,” and to explain relevant matters “to the extent reasonably necessary to permit the client to make informed decisions regarding the representation.” [27]  To the extent use of ChatGPT or other AI services in connection with the representation of a client is contemplated, it is therefore important to discuss the potential risks and benefits with the client, so that an informed decision can be made.[28]

Other Potential Issues and Concerns

A number of other potential legal and ethical questions have been raised, including:

Copyright (and Patent) Issues. A number of questions have been raised, including: Can the AI-generated material be copyrighted (and/or patented) by either the user or the owner and operator of the AI?  What happens if the AI-generated material includes content that is subject to an underlying copyright claim?  Is there some other common law or contractual property right in favor of either the owner or the user of the AI? [29]

Rule 1.5.  What fee is “reasonable” in light of the time and skill either saved by using, or wasted by not using, available AI technology?

“Black Box” Concerns. Could either the information submitted to an AI service and/or the “training” of the AI service directly or indirectly benefit a litigant or other party whose interests are adverse to the client for whom the AI service is procured?  (And/or another former or existing client of the firm?) [30]

Rules 1.7(a)(2), 1.8(a) and/or 1.8(b). These Rules may be implicated to the extent that the lawyer or another principal in the law firm has an ownership or other interest in an AI-related product, service or company.

Unauthorized Practice of Law.  Both in the sense that: (a) Are some of these services that are not owned, maintained, or supervised by an attorney offering what is effectively “legal advice” without a license?  And/or (b) Is an attorney who is hosting, supervising, maintaining or otherwise administering some of these services effectively providing legal advice to clients and/or regarding matters in States where he or she does not maintain a license? [31]

Rule 8.4(g). Given the bias that exists in some of these products and services, might the use of such AI technology result in potential “discrimination on the basis of race, sex, religion, national origin, ethnicity, disability, age, sexual orientation, gender identity, marital status or socioeconomic status in conduct related to the practice of law”?



[1] Just since the first iteration of this paper in October 2023, however, some preliminary guidance has started to emerge. For example, the California State Bar issued a Practical Guidance for the Use of Generative Artificial Intelligence (Nov. 16, 2023), the Florida Bar issued Advisory Ethics Opinion No. 24-1 (Jan. 19, 2024), the Supreme Court of New Jersey issued Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers (Jan. 24, 2024), and the D.C. Bar issued Ethics Opinion No. 388 (April 2024), which are referenced further herein.  (See also, e.g., Letter from the Louisiana Supreme Court re “The Emergence of Artificial Intelligence” dated January 22, 2024.)

[2] Unless otherwise stated, reference is made herein to the ABA Model Rules of Professional Conduct, which have been adopted, in whole or in part, by most States. The reader should consult his or her own State’s controlling ethical or professional rules, statutes and/or code articles to account for any deviations from, or additions to, the relevant Model Rule.

[3] See, e.g., James v. Nat’l Fin. LLC, No.8931, 2014 WL 6845560, 2014 Del.Ch.LEXIS 254 (Del. Chancery Ct. Dec. 5, 2014) (citing Comment [8] to Rule 1.1 and quoting Judith L. Maute, Facing 21st Century Realities, 32 Miss.C.L.Rev. 345, 369 (2013)) (“Deliberate ignorance of technology is inexcusable…. If a lawyer cannot master the technology suitable for that lawyer’s practice, the lawyer should either hire tech-savvy lawyers tasked with responsibility to keep current, or hire an outside technology consultant who understands the practice of law and associated ethical constraints”). See also, e.g., Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers (Jan. 24, 2024) (The core ethical responsibilities of lawyers are unchanged by the integration of AI, as was true with the introduction of computers and the internet. While AI does not change the fundamental duties, lawyers must be aware of new applications and potential challenges. As with any disruptive technology, a lack of careful engagement could lead to ethical violations, underscoring the need for lawyers to adapt their practices mindfully).

[4] DC Bar Ethics Opinion No. 388 (April 2024).

[5] Mata v. Avianca, Inc., No.22-1461, 2023 WL 4114965, 2023 U.S.Dist.LEXIS 108263 (S.D.N.Y. June 22, 2023).

[6] Mata v. Avianca, supra, at ¶45.

[7] See, e.g., David Lat, “The Ethical Implications of Artificial Intelligence” Above the Law: Law2020, (available at:, as of Oct. 27, 2023).

[8] The full Comment provides that: “Competent handling of a particular matter includes inquiry into and analysis of the factual and legal elements of the problem, and use of methods and procedures meeting the standards of competent practitioners. It also includes adequate preparation. The required attention and preparation are determined in part by what is at stake; major litigation and complex transactions ordinarily require more extensive treatment than matters of lesser complexity and consequence.”  See also, e.g., ABA Model Rule of Professional Conduct 2.1 (“In representing a client, a lawyer shall exercise independent professional judgment and render candid advice. In rendering advice, a lawyer may refer not only to law but to other considerations such as moral, economic, social and political factors, that may be relevant to the client’s situation”).  Some have also pointed to Official Comment [1] to ABA Model Rule of Professional Conduct 1.3 (“A lawyer must also act with commitment and dedication to the interests of the client and with zeal in advocacy upon the client’s behalf”) with respect to over-reliance on an AI product or service that might only provide a neutral or objective treatment of the law.

[9] ABA Model Rule of Professional Conduct 3.3(a)(1) and (2). See also, e.g., OpenAI Terms of Use, No.2(c)(v) (updated March 14, 2023) (available at:, as of Oct. 27, 2023) (“You may not … represent that output from the Services was human-generated when it is not”).

[10] Federal Rule of Civil Procedure 11(b) provides that: “By presenting to the court a pleading, written motion, or other paper – whether by signing, filing, submitting, or later advocating it – an attorney or unrepresented party certifies that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances: (1) it is not being presented for any improper purpose, such as to harass, cause unnecessary delay, or needlessly increase the cost of litigation; (2) the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law; (3) the factual contentions have evidentiary support or, if specifically so identified, will likely have evidentiary support after a reasonable opportunity for further investigation or discovery; and (4) the denials of factual contentions are warranted on the evidence or, if specifically so identified, are reasonably based on belief or a lack of information.”

[11] Mata v. Avianca, supra, 2023 U.S.Dist.LEXIS 108263 at **2-3 (“if the matter had ended with Respondents coming clean about their actions shortly after they received the defendant’s March 15 brief questioning the existence of the cases, or after they reviewed the Court’s Orders of April 11 and 12 requiring production of the cases, the record now would look quite different. Instead, the individual Respondents doubled down and did not begin to dribble out the truth until May 25, after the Court issued an Order to Show Cause why one of the individual Respondents ought not be sanctioned. For reasons explained and considering the conduct of each individual Respondent separately, the Court finds bad faith on the part of the individual Respondents based upon acts of conscious avoidance and false and misleading statements to the Court”).

[12] See, e.g., Official Comment [1] to ABA Model Rule of Professional Conduct 5.3 (“Paragraph (a) requires lawyers with managerial authority within a law firm to make reasonable efforts to ensure that the firm has in effect measures giving reasonable assurance that nonlawyers in the firm and nonlawyers outside the firm who work on firm matters act in a way compatible with the professional obligations of the lawyer…. Paragraph (b) applies to lawyers who have supervisory authority over such nonlawyers within or outside the firm”) (emphasis supplied).

[13] See, e.g., Natalie A. Pierce and Stephanie L. Goutos, Why Lawyers Must Responsibly Embrace Generative AI (2023) (available at:, as of Oct. 27, 2023) at p.14 (“In 2012, the title of Model Rule 5.3 was updated to make clear the rule encompassed any ‘non-lawyer assistance’, which opens the door for any type of non-lawyer assistance, whether human or not”) (citing Nicole Yamane, Artificial Intelligence in the Legal Field and the Indispensable Human Element Legal Ethics Demands, 33 Geo. J. Legal Ethics 877, 884 (2020) (concluding that, under Rule 5.3, the AI could be considered a “nonlawyer” that is being delegated work by the lawyer, triggering the lawyer’s duty to ensure that the work product produced by the AI program is competent)).

[14] ABA Model Rule of Professional Conduct 1.6(a) and (c).

[15] See, e.g., Mostafa Soliman, Navigating the Ethical and Technical Challenges of ChatGPT, New York State Bar Association (May 17, 2023) (available at:, as of Oct. 27, 2023). See also, e.g., Florida Advisory Opinion 24-1 (Jan. 19, 2024) (“When using a third-party generative AI program, lawyers must sufficiently understand the technology to satisfy their ethical obligations. For generative AI, this specifically includes knowledge of whether the program is ‘self-learning’.  A generative AI that is ‘self-learning’ continues to develop its responses as it receives additional inputs and adds those inputs to its existing parameters. Use of a ‘self-learning’ generative AI raises the possibility that a client’s information may be stored within the program and revealed in response to future inquiries by third parties”).

[16] Andrew Tarantola, OpenAI Says a Bug Leaked Sensitive ChatGPT User Data, Engadget (March 24, 2023) (available at:, as of Oct. 27, 2023).

[17] See generally: Privacy Policy, OpenAI (updated June 23, 2023) (available at:; OpenAI Privacy Request Portal (updated Oct. 26, 2023) (available at:, as of Oct. 27, 2023); Natalie, What Is ChatGPT?, (available at:, as of Oct. 27, 2023); Michael Schade, How your data is used to improve model performance, (available at:, as of Oct. 27, 2023); Johanna C., How to delete your account, (available at:, as of Oct. 27, 2023).

[18] 42 U.S.C. §§ 1320d, et seq., and 45 C.F.R. §§ 164.500, et seq.

[19] Available at: (as of Oct. 27, 2023).

[20] Cal. Civ. Code, §§ 1798.100, et seq.

[21] See, e.g., 18 U.S.C. §1839(3).

[22] Florida Advisory Opinion 24-1 (Jan. 19, 2024).   See also, e.g., California Practical Guidance for the Use of Generative Artificial Intelligence (Nov. 16, 2023) (“A lawyer must not input any confidential information of the client into any generative AI solution that lacks adequate confidentiality and security protections. A lawyer must anonymize client information and avoid entering details that can be used to identify the client.” Further suggests that a lawyer who intends to use confidential information in a generative AI solution should – including by reviewing the Terms of Use and consulting with an IT professional – “ensure that the provider does not share information with third parties or utilize the information for its own use in any manner, including to train or improve its product”).

[23] OpenAI Terms of Use, No.5(c) (updated March 14, 2023) (available at:, as of Oct. 27, 2023).  See also, e.g., What Is ChatGPT? No.8 (“Please don’t share any sensitive information in your conversations”).

[24] OpenAI Terms of Use, (updated Nov. 14, 2023) (eff. Jan. 31, 2024) (available at:, as of March 30, 2024).

[25] Claude’s Acceptable Use Policy (available at:, as of March 30, 2024).

[26] Pierce and Goutos, supra, at pp.15-16.

[27] ABA Model Rule of Professional Conduct 1.4(a)(2) and (c).

[28] See, e.g., California Practical Guidance for the Use of Generative Artificial Intelligence (Nov. 16, 2023) (“The lawyer should consider disclosure to their client that they intend to use generative AI in the representation, including how the technology will be used, and the benefits and risks of such use.” In addition: “A lawyer should review any applicable client instructions or guidelines that may restrict or limit the use of generative AI”); Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers (Jan. 24, 2024) (The Rules do not impose an affirmative duty on lawyers to tell clients every time that they use AI. However, if a client asks if the lawyer is using AI, or if the client cannot make an informed decision about the representation without knowing that the lawyer is using AI, he or she has an obligation to inform the client of their use of AI. As to client interactions, a lawyer can use AI to explain a matter to the extent reasonably necessary to permit the client to make informed decisions, but must continue to oversee such communications to ensure accuracy); Florida Advisory Opinion 24-1 (Jan. 19, 2024) (“A lawyer should be wary of utilizing an overly welcoming generative AI chatbot that may provide legal advice, fail to immediately identify itself as a chatbot, or fail to include clear and reasonably understandable disclaimers limiting the lawyer’s obligations”).

[29] See, e.g., U.S. Copyright Office, Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Federal Register 16190-16194 (March 16, 2023) (“If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it. For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology – not the human user…. In other cases, however, a work containing AI-generated material will also contain sufficient human authorship to support a copyright claim. For example, a human may select or arrange AI-generated material in a sufficiently creative way that ‘the resulting work as a whole constitutes an original work of authorship.’ Or an artist may modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection. In these cases, copyright will only protect the human-authored aspects of the work, which are ‘independent of’ and do ‘not affect’ the copyright status of the AI-generated material itself”); Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022) (holding that an AI software system cannot be an “inventor” for purposes of obtaining a patent under the Patent Act); Thaler v. Perlmutter, No.22-1564, 2023 WL 5333236, 2023 U.S.Dist.LEXIS 145823 (D.D.C. Aug. 18, 2023) (rejecting copyright claim by AI owner and operator over a visual work of art autonomously generated by his machine, while noting that: “Undoubtedly, we are approaching new frontiers in copyright as artists put AI in their toolbox to be used in the generation of new visual and other artistic works. The increased attenuation of human creativity from the actual generation of the final work will prompt challenging questions regarding how much human input is necessary to qualify the user of an AI system as an ‘author’ of a generated work, the scope of the protection obtained over the resultant image, how to assess the originality of AI-generated works where the systems may have been trained on unknown pre-existing works, how copyright might best be used to incentivize creative works involving AI, and more”). See also, e.g., OpenAI Terms of Use, No.3(a) (updated March 14, 2023) (available at:, as of Oct. 27, 2023) (“As between the parties and to the extent permitted by applicable law, you own all Input. Subject to your compliance with these Terms, OpenAI hereby assigns to you all its right, title and interest in and to Output. This means you can use Content for any purpose, including commercial purposes such as sale or publication, if you comply with these Terms. OpenAI may use Content to provide and maintain the Services, comply with applicable law, and enforce our policies. You are responsible for Content, including for ensuring that it does not violate any applicable law or these Terms”); but see OpenAI Terms of Use, No.2(c)(v) (“You may not … represent that output from the Services was human-generated when it is not”).

[30] See generally: ABA Model Rules of Professional Conduct 1.6 – 1.9.

[31] See, e.g., ABA Model Rule of Professional Conduct 5.5.  Note that, as per Official Comment [2] to the Rule, “the definition of the practice of law is established by law and varies from one jurisdiction to another.”  See Yamane, supra, 33 Geo. J. Legal Ethics at 887-888 and fn.80-82 (citing Lola v. Skadden Arps, 620 Fed.Appx. 37 (2nd Cir. 2015) (noting that the definition of “practice of law” is primarily a matter of State concern, and holding that a contract lawyer exclusively performing document review “under such tight constraints that he exercised no legal judgment whatsoever” was not engaged in “the practice of law” within the State of North Carolina); Janson v., Inc., 802 F.Supp.2d 1053 (W.D.Mo. 2011) (denying defendant’s motion for summary judgment as to the unauthorized practice of law where “LegalZoom’s internet portal offers consumers not a piece of self-help merchandise, but a legal document service which goes well beyond the role of a notary or public stenographer”); Steven Buse, Disclaim What I Say, Not What I Do: Examining the Ethical Obligations Owed by LegalZoom and Other Online Legal Providers, 37 J. Legal Prof. 323, 323 (2013); Drew Simshaw, Ethical Issues in Robo-Lawyering: The Need for Guidance on Developing and Using Artificial Intelligence in the Practice of Law, 70 Hastings L.J. 173, 178 (2018) (“On the legal self-help front, courts, state legislatures, and bar associations in the near term will have to decide whether increasingly sophisticated services such as DoNotPay constitute the unauthorized practice of law”); Simon, Lindsay, Sosa & Comparato, Lola v. Skadden and the Automation of the Legal Profession, 20 Yale J.L. & Tech. 234, 248 (2018) (“According to the Lola decision, if a lawyer is performing a particular task that can be done by a machine, then that work is not practicing law”)).