
Guideline: Provides guidance to regulated members to support them in the application of Standards of Practice.
December 2025
Introduction
Artificial Intelligence (AI)
While there is no single accepted definition, AI is considered to be the capability of a computer or artificial system to do things that are normally associated with human cognition, such as perceiving, reasoning, learning, problem-solving and self-improvement. Generative AI (GenAI), which uses advanced systems that can produce or generate new content, such as text, audio, and videos in response to user prompts, is of particular interest in healthcare. This guideline primarily addresses the use of GenAI in service provision, although the principles discussed have a broad application to non-generative forms of AI, e.g., voice assistants or transcription tools.
While AI has the potential to improve health system efficiency as well as client and provider experiences, there are important ethical, legal, and practical considerations when integrating AI into professional practice. While this guideline does not set out new professional standards for regulated members, it provides important information on how minimum standards can be met when integrating AI into practice and outlines key professional obligations that must be considered when using AI in service delivery.
Potential Uses of AI in SLP and Audiology
AI includes a wide range of tools with varying degrees of integration and intrusiveness into service provision. GenAI systems in particular have capabilities that allow them to make interpretations and recommendations about client care based on existing data and training. Given these capabilities, AI is rapidly being integrated into healthcare, and it is anticipated that AI use will become more commonplace in the practice of SLPs and audiologists.
There are many current and potential uses of AI to support the clinical and professional practice of SLPs and audiologists, some of which are shown in the table below.
| Use | Examples |
|---|---|
| Administrative | Tools which can automate clinical documentation or manage client billing, scheduling, translation, and interpretation. |
| Diagnostic | Image, speech, or other data-analysis tools which can provide interpretation of clinical data. |
| Intervention | Tools which can assist in session planning or preparation of therapy materials or create simulated human-like therapy assistance for virtual care clients. |
| Research | Tools which can review and summarize literature or clinical research datasets. |
Benefits and Risks of AI in Healthcare
There are a number of benefits and risks associated with using AI to support service provision. The decision regarding whether to integrate AI into practice requires careful consideration of benefits, risks, and how risks can be mitigated.
Benefits
There are several potential benefits of incorporating AI into service provision:
- Improved system efficiency through supporting administration and clinical care (e.g., improved speed and accuracy of diagnoses through diagnostic algorithms).
- Reduced health systems cost due to improved efficiency.
- Decreased practitioner workload and increased time for direct client care resulting from the automation or optimization of administrative or preparation tasks.
- Improved practitioner wellness and job satisfaction resulting from reduced administrative workloads.
- Improved service safety and effectiveness (once the reliability of AI-generated clinical interpretation and decision-making can be ensured).
- Providing additional support for clinician learning, when used in conjunction with professional education and peer support (e.g., in the management of rare or unusual diagnoses that are not commonly encountered in a practice setting).
- Increased access to personalized care, whereby machine learning and algorithms can be used to tailor care to a client’s unique profile (e.g., their genetic, behavioral, cultural, and economic circumstances).
Risks
The negative impacts and potential unintended consequences of using AI must be considered by regulated members using AI, or considering using AI, in their practice. Generally, as an AI system or tool more closely models the practice of SLPs or audiologists, the risks associated with its implementation will increase, which will then increase the level of accountability and risk management required.
The following are some of the risks of using AI in professional practice.
Governance and Evidence Base
While laws and regulations specific to the use of AI in healthcare are being contemplated or developed at various levels of government, there is currently no federal or provincial legislation to guide the use of AI in healthcare. There is currently limited evidence related to the use of AI by SLPs and audiologists which can guide practice, although there may be research currently underway. At the present time, increased uptake of AI in professional practice may be taking place in a context where there is little governance and evidence to support its use.
Privacy and Security
Many AI systems designed for use in healthcare involve the collection, processing, retention, and utilization of client personal and/or health information. Consequently, there are risks that client information may be accessed by unauthorized users and be misused (e.g., used for purposes other than healthcare) or otherwise handled improperly without the client’s consent (e.g., fraudulent use of biometric data such as voice and face by third parties). Client information stored in AI systems may be vulnerable to cybersecurity threats and other breaches. Even without identifying information (e.g., names), a client’s privacy may be exposed by the clinical uniqueness of the case, or their particular circumstance.
Bias
AI systems are vulnerable to bias when the datasets on which the systems are built and trained contain biased information or information that is not representative of the client. Biased or non-representative data sets can generate, replicate, or amplify biases, resulting in prejudiced, culturally stereotyped, inaccurate, or otherwise inappropriate service delivery interpretations, results, and recommendations for clients. For example, speech recognition AI tools have been shown to have lower accuracy for dialect speakers, while facial recognition technology has persisting difficulty recognizing Black, Brown, and Asian skin types and facial features. This risk is compounded when users are unaware that bias can occur when using AI technology.
Human cognitive bias may also occur when using AI systems and tools. This happens when humans trust AI-generated information or decisions more than their own or others’ knowledge and judgement, even when there is no basis for that increased trust in the AI system or tool in question. For example, a clinician may defer to the judgement of an AI tool despite not knowing how the tool operates, or without verifying the accuracy of the information it generates.
Client Trust and Transparency
With increased integration of AI systems and tools in service provision, it is important to consider that clients will have varying degrees of understanding, trust in, and desire for the use of AI in their care.
For example, clients may have concerns about the reliability of information generated by AI with respect to their care, that AI may not adequately consider their individual circumstance or needs, or about the levels of empathy afforded to them when AI systems are used in their care. Clients may also vary in their assessment of specific AI tools and have different expectations for different types of tools. For example, a client may agree with the use of AI billing tools but may not approve of the use of AI technology in assisting in the making of diagnoses or in their treatment planning.
When used without client consent or knowledge, or when clients perceive its use as substituting for the healthcare provider they are seeing, AI has the potential to disrupt the client-provider relationship, which is integral to safe and effective care.
Accuracy and Accountability
The data sources, training, and/or processes by which information is analyzed, and outputs are generated may not be clear or made transparent to the user when using an AI system or tool. This creates challenges in validating the accuracy of outputs. In addition, AI systems have been shown to generate output that appears plausible, but which may be inaccurate, incomplete, outdated, biased, unverifiable, or otherwise inappropriate. The current proliferation of healthcare-specific AI systems and tools may make it difficult to know which AI applications can be best relied on for accuracy.
The accuracy of some AI systems in generating outputs is also dependent on the nature and quality of the information input and the prompts provided by the human user. A poor-quality prompt may yield inappropriate or incorrect information for the specific situation, which may have negative impacts on the client. The operational skill required of the user to accurately prompt AI for required outputs may not be apparent to the user.
In healthcare, AI is intended to be used as a tool to support and enhance competency and client care. However, overreliance on AI as a tool may result in insufficient oversight by providers with respect to decision-making and clinical judgement. Unmonitored AI use over time may result in some aspects of client care being unintentionally and inappropriately transferred to AI. The potential for AI to replace, rather than support, the clinical judgement of the provider, coupled with concerns about the accuracy of AI outputs, represents a major safety risk for clients and a potential violation of ethical and professional standards.
Other Risks and Potential Unintended Consequences
In addition to the major risks noted above, regulated members should consider other potential unintended consequences of the use of AI in service provision:
- Increased workload: Using AI requires that regulated members develop and maintain basic understandings of any AI systems or tools used in their practice, including how the technology works, and the risks associated with use. Depending on their role, regulated members may also be responsible for the implementation of AI in their workplace, including the responsibility for educating others. The upskilling required to maintain basic knowledge of AI systems and tools, along with the increased accountability required when using AI (e.g., verifying accuracy, proofreading, or editing outputs), may increase regulated members’ workloads.
- Intellectual property violations: where the output generated by AI inappropriately uses or reproduces content without required permissions or rights to do so.
- Undue influence over decision-making and skills degradation over time, e.g., through complacency or overreliance on AI technology.
- Masking competency-to-practice issues: AI can produce outputs that reflect sound clinical judgement or professional practice. As such, AI use in service delivery may mask a practitioner’s lack of the capabilities or competencies needed to provide competent, effective, and efficient services. In such cases, safe and competent care may be attributable to the AI technology used rather than to the competency of the provider. This may undermine the ability of regulatory systems to identify and manage concerns about unprofessional conduct.
- Environmental impacts: due to the significant energy, water, and other resource consumption required for the operation of AI systems and tools.
- Workforce equity and diversity impacts: AI is not an appropriate tool for understanding the lived experiences and perspectives of under-represented groups. As such, using AI to generate such perspectives may inadvertently undermine efforts to diversify the professions. For example, an organization may not seek the perspective of someone with lived experience of marginalization for an initiative that requires that type of input if such perspectives are thought to be plausibly ‘generated’ using AI.
Role of the College
Data collected by ACSLPA in 2024 showed that AI use among regulated members was low; respondents identified AI as an emerging trend and requested practice guidance in this area (see ACSLPA’s 2024 Registered Member Survey Results at a Glance). However, AI is being rapidly integrated into and transforming healthcare delivery, including for SLPs and audiologists. ACSLPA recognizes that regulated members may be considering using, or may already be using, AI to varying degrees in their practice, whether as a personal choice or due to employer implementation of AI systems and tools. At this time, the College does not require regulated members to use AI in their work.
ACSLPA does not regulate the use of tools, devices, or technologies in the practice of SLPs and audiologists, nor does the College regulate the vendors of AI tools and technologies. The College’s role is to regulate its members who may use AI in their practice. As such, it is not the role of the College to advise on the appropriateness of using AI in professional practice, nor can the College comment on or recommend specific AI systems, tools, or vendors to regulated members. ACSLPA’s mandate as a regulatory College requires it to uphold and enforce existing legislation, regulations, and professional expectations that apply when using AI in practice. This includes the ACSLPA Code of Ethics and Standards of Practice.
The College will closely monitor developments in this area, including the development of federal and provincial legislation and evidence-based practice guidance, and communicate these to regulated members as appropriate. This guideline will be updated as more information becomes available.
Practice Guidance for Using AI in Professional Practice
General Principles
Regulated members using AI in their practice are advised to remain mindful that they must practice in compliance with the minimum practice standards as outlined in the ACSLPA Code of Ethics and Standards of Practice. The following are some general principles that regulated members should consider when implementing AI into their practice.
Strong Rationale for Selection and Implementation
If using AI systems and tools, regulated members must understand and critically assess:
- The technology’s intended purpose and its basic functioning, including its ability to produce valid, reliable, and interpretable information.
- The benefits, risks, and limitations of the technology.
- How reasonable and appropriate risk mitigation strategies can be implemented when using the technology.
- Whether the technology’s intended purpose is appropriate for use in the regulated member’s particular practice setting and scope.
Regulated members are expected to have a strong rationale for the selection and implementation of any AI systems or tools in their practice. Using AI to support clinical and professional practice will require the regulated member to maintain competencies in being able to evaluate AI systems and tools for accuracy and relevance. Regulated members are accountable for violations of the ACSLPA Code of Ethics and Standards of Practice that may flow from the use of AI technology in client care. Regulated members are thus expected to manage and mitigate any harms or unintended consequences that AI technologies can potentially cause.
Protecting Client Interests
Client interests must be considered first and foremost with respect to the integration of AI systems and tools into clinical and professional practice. If considering using AI in their practice, regulated members should reflect on whether it would be in their clients’ best interests to do so, and whether AI use can harm client interests.
Accountability
When using AI, regulated members are responsible and accountable for ensuring service provision meets minimum standards of care and that clients are provided with safe, competent, and ethical care. AI technology is intended to assist and complement care. It is not a replacement for clinical and professional judgement and should not be used as such. At all times, care provided by a regulated member must reflect their own clinical reasoning and professional judgement.
Regulated members should review any AI-generated information for accuracy, relevance, and appropriateness; and use their sound clinical and professional judgement when integrating AI into care. Regulated members should be able to independently explain care plans to clients and others without the use of AI. Regulated members must retain clinical and professional oversight of any AI-generated information to be used in care, similar to the supervision of students or support personnel. The use of AI systems and tools in practice should be regularly monitored and evaluated to ensure that the systems and tools are meeting their intended purposes and supporting client care.
It is recommended that regulated members monitor their own use of AI in their service delivery to prevent overreliance and to be able to recognize if their independent reasoning and judgement becomes jeopardized, or if they begin to transfer oversight and decision making to AI systems and tools used in their practice.
Technical Support
It is also recommended that regulated members ensure that adequate technical support is in place should technical issues arise when using AI, i.e., that they have support for troubleshooting and resolving any technical difficulties encountered. This may include the ability to identify, report, and address concerns about AI technologies with their vendors or operating organizations.
The use of AI does not alter the fundamental obligations regulated members have, some of which are discussed below.
Informed Consent
ACSLPA’s minimum standards regarding informed consent apply to the use of AI in service delivery. AI use must not infringe on the client’s ability to make informed decisions about their care. As such, SLPs and audiologists are required to obtain informed consent from clients prior to using AI in their care, particularly when AI is used in diagnosis or treatment. Regulated members should:
- Be transparent with clients when using AI in their care.
- Disclose how the AI system or tool will be used, including information on how any AI technology used contributed to clinical decisions in the client’s care.
- Explain the risks and benefits of using the AI system or tool, along with any safeguards that have been put into place to manage risks and ensure accuracy, reliability, privacy, and security.
- Inform clients of their right to refuse, withdraw, or modify consent for the use of AI in their care at any time.
- Have suitable alternatives to AI in place should the client refuse or withdraw consent for the use of AI in their care. If a client refuses the use of AI systems and tools in their care, regulated members must be able to independently perform or complete the task or activity that would otherwise have been executed by AI.
ACSLPA regulated members are advised to review Standard of Practice 2.3 Informed Consent and the Informed Consent for Service guideline for information on ACSLPA’s minimum standards on obtaining informed consent.
Privacy & Confidentiality
If using AI systems and tools in service delivery, it is important for regulated members to ensure that minimum standards for privacy and confidentiality of client information are met. Any AI systems or tools utilized should allow for clients’ privacy and confidentiality to be maintained at all times in accordance with existing legislation and regulations. Regulated members will need to review the privacy legislation specific to their particular work setting to ensure any AI systems and tools used are in compliance with the relevant legislation and note whether the legislation applicable to them requires a privacy impact assessment for the use of AI.
Regulated members should familiarize themselves with the security policies and protocols of any AI technologies used to ensure they are in compliance with Standard of Practice 2.2 Privacy/Confidentiality and Standard of Practice 4.3 Documentation and Information Management.
Continuing Competence
Under Standard of Practice 3.1 Continuing Competence, regulated members are required to acquire and/or enhance their competence in new areas of practice, and to limit their practice until the necessary competencies have been obtained if they lack the competencies to provide safe and competent services. Regulated members have a wider ethical duty to maintain a level of skill, knowledge, attitudes, and judgement sufficient to provide safe and competent care in their professional practice, and this duty applies to the use of AI in service provision. Regulated members should not integrate AI systems and tools unless they have developed the competencies necessary to use the technology in a way that ensures safe and competent care.
Should regulated members choose to use AI in their practice or should they be mandated by their employer to integrate AI systems and tools into their practice, they should engage in ongoing professional development to maintain or enhance their competence with AI.
This may include:
- Understanding the basic principles, processes, benefits, risks, and limitations of AI systems and tools used.
- Understanding how the technology and any information generated can be safely and ethically used in client care.
- Developing and maintaining operational competency required for effective use of AI systems and tools (e.g., writing prompts for GenAI).
- Staying up to date on advancements of AI systems and tools used.
Documentation and Information Management
ACSLPA’s Standard of Practice 4.3 Documentation and Information Management applies to any documentation created with the support of AI. At all times, client records must accurately and completely reflect the professional’s involvement in care. As regulated members are required to maintain clear, accurate, and complete records, they should not rely solely on AI for completion of documentation. Regulated members should review and verify AI-generated documentation for accuracy, appropriateness, and compliance with ACSLPA’s documentation standards before finalization and inclusion in client records.
ACSLPA recommends providing notations of:
- whether assistive AI technology was used to generate documentation;
- which technology was used and in what context; and
- who the documentation was reviewed by, prior to inclusion in the client file.
In addition, regulated members should take care to ensure that AI-generated records include only the information needed to accurately record the clinical encounter and to support subsequent care and treatment. Extraneous information or information about third parties not involved in the encounter should not be included in client records. For example, if a client shares information about upcoming family holiday plans, or a family member’s illness, this information would not be considered relevant to their clinical encounter and should not be recorded in their visit record. A good rule of thumb when using AI to generate client records is to review each record to ensure it includes only what a healthcare provider would record as relevant clinical information.
Regulated members are advised to review the College’s Clinical Documentation and Record Keeping guideline and Standard of Practice 4.3 Documentation and Information Management for more information on documentation minimum standards.
Culturally and Linguistically Appropriate Care
Under ACSLPA Standard of Practice 1.3 Client Assessment and Intervention, regulated members are required to provide culturally and linguistically appropriate care. Regulated members also have a broader requirement under the ACSLPA Code of Ethics to practice with cultural humility and to avoid cultural harm in their practice. As such, it is important for regulated members to maintain an awareness of bias risks when using AI, including the potential for AI systems and tools to generate information that includes harmful stereotypes or incorrect information about different population groups. Regulated members are expected to be knowledgeable about the data sets of any AI technology used in order to make determinations about its appropriateness for use with clients.
Depending on the diversity of the data set used to generate output, some AI tools may be inappropriate for use with certain clients. For example, as with standardized assessment tools, if an AI tool is trained on data from a homogenous sample of individuals that is not inclusive of the client, the diagnostic information it outputs may not be appropriate for that client, because important factors from the client’s background will not have been incorporated into the tool’s data processing and analysis or reflected in its outputs. Regulated members should be prepared to identify these types of contraindications to the use of individual AI systems and tools as they relate to bias and the provision of culturally and linguistically responsive care.
Regulated members are expected to apply strong clinical reasoning and oversight to the interpretation of any AI-generated information, accounting for the demographics and health context of the client, to ensure that the obligation to provide culturally and linguistically appropriate care is met.
Evidence Informed Practice
AI systems and tools have the potential to improve efficiency with literature searches and research synthesis. However, AI should be considered a support tool for evidence-informed practice and must not replace a professional’s critical appraisal, expertise and experience, or client perspectives. Regulated members should remain mindful that AI research tools may be particularly susceptible to publication bias, whereby the data set drawn on is limited to studies that were published because of the direction or strength of their findings.
If utilizing AI to support evidence-informed practice, regulated members should:
- Verify any AI-generated references, as AI has been shown to produce incomplete or fabricated citations.
- Cross-check any AI-generated references against the original sources to verify the accuracy of the information provided.
- Integrate AI-generated summaries with their clinical expertise, client preferences, and any contextual factors impacting care.
- Ensure that client information is not entered into AI tools used for evidence-informed practice purposes. Regulated members should note that even without identifying information, a client’s privacy may be exposed by the clinical uniqueness of the case or their particular circumstance, and should exercise caution before inputting this type of information into AI systems and tools.
- Maintain an awareness of the potential for bias and oversimplification in AI-generated research summaries.
As with the other aspects of practice noted in this guideline, ACSLPA’s Standard of Practice 1.2 Evidence-Informed Practice applies when AI is integrated into this area of practice.
Clinical Supervision
A regulated member of ACSLPA is responsible and accountable for the services delivered by personnel under their direction and supervision, which includes the use of AI by support personnel and students. As such, regulated members are required to maintain an awareness of the use of GenAI systems and tools by personnel under their supervision and to apply the same standards of care as they would if using AI themselves.
For more information on the minimum standards for clinical supervision, including what kind of personnel are considered supervisees, please see Standard of Practice 4.4 Clinical Supervision.
Implementing AI in Professional Practice
Employer Policy
Some employers may already have policy and procedure documents that provide guidance on the use of AI specific to their employees’ work settings and scopes. Regulated members are advised to consult with their employers or administrators if implementing AI in their practice, to ensure compliance with employer policies and procedures, as well as ACSLPA minimum standards. Regulated members are responsible for working with their employers to ensure that employers are aware of the College’s minimum standards and requirements with respect to AI use in the provision of care, and that AI is implemented in workplaces in a manner that keeps regulated members in compliance with those standards.
AI Vendors
AI vendors, and the policies and procedures they put in place for their products, are an integral part of public protection when AI is used to help deliver services. Vendor responsibilities, transparency, and legislative and regulatory compliance should be verified with vendors prior to implementing an AI system or tool in the workplace.
Regulated members should consider avoiding an AI system or tool if its vendor cannot provide sufficient information to demonstrate that the product would allow the regulated member to practice in compliance with ACSLPA standards and legislative requirements. Vendors that do not provide transparency regarding data sets, algorithm training, output processes, or privacy and security practices should be avoided.
Developing Policy and Procedural Guidance
To ensure minimum standards are met when using AI in service provision, employers, clinic owners, and SLPs and audiologists may wish to develop a robust set of policies and procedures to govern the use of AI within their practice. These guidance documents can be tailored to particular practice settings and types of clients supported.
Implementation Questionnaire
To support the implementation of AI in professional practice for regulated members wishing to do so, ACSLPA has provided a list of questions to reflect on when considering using AI in practice. Please note that this questionnaire provides a non-exhaustive list of general considerations. There may be other important considerations specific to regulated members’ practice settings and scopes that should be considered prior to implementing AI in professional practice.
| Understanding the technology |
Do I have a basic familiarity with how this technology works? What kinds of outputs does it produce and how? What are the risks and limitations of this technology? What steps can I take to mitigate risks? Do I have a strong rationale for using this technology in my practice? What other information do I need to strengthen my rationale? Will the implementation of this technology be feasible for my practice/work setting? Will using this technology allow me to practice in compliance with all legislative and minimum standard requirements? |
| Accountability |
How will I integrate this AI technology into my practice? How will it be used in client care? What goal(s) am I trying to achieve by implementing this AI technology into my practice? How will I validate the accuracy, appropriateness, and relevance of outputs? Have I consulted with my employer or administration about using this technology? Do I have the necessary competencies to use this technology safely and efficiently? If not, how can I get these competencies? Is there a plan for ongoing training to stay up to date with any updates to the technology? How will I ensure that I maintain oversight of client care when using this technology? |
Transparency
- What information must be shared with clients about how AI technology is used in their care? How will I share this information?
- Do I know enough about the technology to inform clients about its use?
- Have I incorporated information about the use of AI systems and tools into my informed consent process?
Bias
- What are the bias risks associated with this AI technology?
- Is the data or training set used in this technology representative of my client population? How can I verify this?
Security
- Have I reviewed the vendor’s privacy policy and contractual terms, including its data use, management, retention, and storage policies?
- Does this AI technology adequately safeguard client privacy and confidentiality in compliance with relevant legislation and minimum standards?
Monitoring use
- Is the AI technology functioning as intended?
- Have there been any unintended consequences of using this AI technology in my practice? What are they?
- What benefits, risks, and limitations have I observed from using this technology?
- Are there any contraindications against continued use of this technology?
- Have I maintained an appropriate level of oversight when using this AI technology?
- How can I ensure that my independent clinical reasoning and professional judgement are maintained with continued use of this technology?
- Am I keeping up to date with updates and changes to this technology?
Regulated members are reminded that they can contact the College with questions about the responsible use of AI in professional practice.
References & Resources
Bhardwaj, A., Sharma, M., Kumar, S., Sharma, S., & Sharma, P. C. (2024). Transforming pediatric speech and language disorder diagnosis and therapy: The evolving role of artificial intelligence. Health Sciences Review, 12. https://doi.org/10.1016/j.hsr.2024.100188
College of Physicians and Surgeons of Alberta. (2023). Advice to the profession: Artificial intelligence in generated patient record content. https://cpsa.ca/wp-content/uploads/2023/08/AP_Artificial-Intelligence.pdf
College of Physicians & Surgeons of Manitoba. (2024). Advice to the profession on the responsible use of artificial intelligence in the practice of medicine. https://cpsm.mb.ca/assets/Advice/Advice%20to%20the%20profession%20-%20AI.pdf
College of Physiotherapists of Alberta. (2025, September). Artificial intelligence guide for Alberta physiotherapists. https://www.cpta.ab.ca/docs/303/AI_Guide.pdf
College of Physiotherapists of Ontario. (2025). Artificial intelligence – Principles for physiotherapists. https://collegept.org/resource/artificial-intelligence-principles-for-pts/
Du, Y., & Juefei, F. (2023). Generative AI for therapy? Opportunities and barriers for ChatGPT in speech-language therapy. Tiny Papers @ ICLR 2023. https://openreview.net/forum?id=cRZSr6Tpr1S
Engineers & Geoscientists British Columbia. (2024). Practice advisory: Use of artificial intelligence (AI) in professional practice. https://tools.egbc.ca/Registrants/Practice-Resources/Guidelines-Advisories/Document/01525AMWZDBFA4VTKQBRHJXVI6AR4UUFGV/Use%20of%20Artificial%20Intelligence%20in%20Professional%20Work
Government of Canada. (2024, March 11). What is AI? https://www.canada.ca/en/department-national-defence/corporate/reports-publications/dnd-caf-artificial-intelligence-strategy/what-is-ai.html
Government of Canada. (2025, June 3). Guide on the use of generative artificial intelligence. https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/guide-use-generative-ai.html
Reis, M., Reis, F., & Kunde, W. (2024). Influence of believed AI involvement on the perception of digital medical advice. Nature Medicine, 30, 3098–3100. https://doi.org/10.1038/s41591-024-03180-7
Slavich, B. K., Atcherson, S. R., & Zraick, R. (2024). Using ChatGPT to improve health communication and plain language writing for students in communication sciences and disorders. Perspectives of the ASHA Special Interest Groups, 9(3), 599–612. https://doi.org/10.1044/2024_PERSP-23-00167
Suh, H., Dangol, A., Meaden-Kaplansky, H., Miller, C.A., & Kientz, J. A. (2024). Opportunities and challenges for AI-based support for speech-language pathologists. CHIWORK ’24: Proceedings of the 3rd Annual Meeting of the Symposium on the Human-Computer Interaction for Work, 14, 1–14. https://doi.org/10.1145/3663384.3663387