New Law About Augmentative/Alternative Communication Training in Virginia

Virginia passed a new law requiring IEP teams to document whether a learner needs augmentative and alternative communication (AAC) training as part of their IEP services. The bill defines AAC broadly, including gestures, facial expressions, writing, communication boards, and speech-generating devices, and ensures that divisions recognize training as an essential support for communication access. The law went into effect July 1, 2025. Read the full text here: https://lis.virginia.gov/bill-details/20251/SB1034/text/SB1034

The focus on training highlights that communication barriers often stem from environments that are not prepared to support a learner’s AAC system. This law helps ensure learners have consistent, knowledgeable partners who understand their AAC tools and support their ability to participate and express themselves throughout the school day.

Americans with Disabilities Act Title II New Accessibility Rule Webinar

The U.S. Department of Justice has finalized a new Web Accessibility Rule under Title II of the ADA that requires state and local governments, including school divisions, to ensure their public-facing websites, mobile apps, and digital content meet WCAG 2.1 Level AA accessibility standards. For public entities serving 50,000 or more people, the rule goes into effect on April 24, 2026 (smaller entities have until April 26, 2027). The rule applies to any digital tools or platforms a school division uses to provide a “service, program, or activity,” which includes instructional materials delivered online. To help professionals prepare for implementation, the Department of Justice hosted a webinar that provides a summary of the rule. The following video is a recording of the webinar:

The New Accessibility Rule Under Title II of the Americans with Disabilities Act

The U.S. Department of Justice has finalized a new Web Accessibility Rule under Title II of the ADA that requires state and local governments, including school divisions, to ensure their public-facing websites, mobile apps, and digital content meet WCAG 2.1 Level AA accessibility standards. For public entities serving 50,000 or more people, the rule goes into effect on April 24, 2026 (smaller entities have until April 26, 2027). These requirements include providing alt-text for meaningful images, ensuring sufficient color contrast, enabling keyboard navigation, captioning multimedia, structuring content with headings, and making online forms and documents accessible.

The rule applies to any digital tools or platforms a school division uses to provide a “service, program, or activity,” which includes instructional materials delivered online. When digital instructional content is part of how learners access information or participate in learning, it must meet accessibility expectations as well.
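To make the color-contrast requirement mentioned above more concrete, here is a minimal sketch in Python that computes the contrast ratio defined in WCAG 2.1; Level AA requires at least 4.5:1 for normal-size text and 3:1 for large text. The formula comes from the WCAG 2.1 specification, but the hex color values in the example are hypothetical and only for illustration.

```python
# Minimal sketch: compute the WCAG 2.1 contrast ratio between two colors.
# The hex values below are hypothetical examples, not taken from the rule.

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of an sRGB color, per the WCAG 2.1 definition."""
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4))

    def channel(c: float) -> float:
        # Linearize each sRGB channel before weighting.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(foreground: str, background: str) -> float:
    """Contrast ratio (lighter + 0.05) / (darker + 0.05), ranging 1:1 to 21:1."""
    l1, l2 = relative_luminance(foreground), relative_luminance(background)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Dark gray text on a white page comfortably passes the 4.5:1 AA threshold
# for normal text, while light gray text on white does not.
print(round(contrast_ratio("#333333", "#FFFFFF"), 1))  # about 12.6 -> passes
print(round(contrast_ratio("#AAAAAA", "#FFFFFF"), 1))  # about 2.3  -> fails
```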

Words of the Week on the Morning Announcements at Rosa Lee Carter Elementary School

Julie Merrouni, an educator at Rosa Lee Carter Elementary School, hosts a segment on the schoolwide morning news show where she shares the focus words of the week, helping build a culture of learning language with augmentative/alternative communication. Enjoy this short video where she introduces the words, explains where symbols that represent these words are located around the building, and shares how they can be used to support language development.

Creating Human-like Audio Conversations with Copilot

Copilot from Microsoft allows educators to design dynamic audio experiences from self-selected content. Educators choose materials, load them to Copilot, and then create an audio overview. The resulting audio sounds like two human podcast hosts discussing the content. Here’s a sample of the podcast hosts discussing the Robots for Everyone Initiative and a transcript of the podcast. Once created, educators can save the audio file to OneDrive and then download it to share with learners.

A screenshot of a dark-themed workspace in Microsoft Copilot Notebook. At the top, the notebook title reads “Robots for Everyone” accompanied by a small robot emoji and a button labeled “Play audio.” Under the title, links appear that say “Add Copilot instructions” and “Tell Copilot how to respond in this notebook.” A large input box is centered below the title with the prompt text “Ask Copilot about your references or another topic for this notebook.” A microphone icon appears on the right side of the box. Directly beneath are two buttons: “Summarize my notebook” and “Suggest 3 questions for this notebook.”

Lower on the page are two tabs: References (selected) and Chats. Within References, a circular button labeled “All” is highlighted, followed by a button labeled “+ New page.” A listed reference appears below titled “Robots for Everyone 🤖 2025 – 2026” with a small document icon and a timestamp that says “1h ago.”

In the bottom right corner of the screenshot, a floating media player shows the same project title “Robots for Everyone 🤖,” a progress indicator (18:42/18:42), playback speed controls, and a button labeled “Open in OneDrive.” A small note below the player reads, “AI-generated content may be incorrect.”

Screenshot of an example of the Audio Overview feature of Microsoft Copilot

Create Podcasts with Transcripts with Brisk

Brisk allows educators to create podcasts featuring two generative artificial intelligence voices from self-selected content. Brisk is an extension available to educators in LCPS. Educators choose a material, like a Google Slides presentation, select the Brisk icon, select Create, select Podcast, and then complete a prompt to create a podcast. Brisk then creates a website with a play button to listen to the podcast and a complete time-stamped transcript. Educators can then share the website with learners to provide an audio option for how to engage with the content. Enjoy this sample titled Audio Supports: Making Learning Accessible for Everyone, generated from a slide deck on audio resources available in LCPS.

A webpage screenshot showing a podcast player and transcript. At the top left is a colorful, stylized cassette tape illustration. To the right, the podcast title reads “Audio Supports: Making Learning Accessible for Everyone.” Below the title are details: labeled as a Podcast, with a duration of 1 minute, a date of Nov 10, 2025, and a note that it was shared by the user. A horizontal audio playback bar shows the play button, elapsed time (01:10), volume control, and a speed selector set to 1.0x. Below the player is a section labeled “Transcript.” The transcript appears as alternating short dialogue lines between two speakers, “Maria” and “Jamal.”

* Maria (00:00) introduces the topic: how technology can make learning easier for all students.
* Jamal (00:07) explains that they are talking about audio supports to help students access information.
* Maria (00:14) gives an example of text-to-speech reading difficult text aloud.
* Jamal (00:23) says it helps everyone by creating inclusive learning experiences.
* Maria (00:30) notes that many devices include tools like Immersive Reader to read words aloud and break them into syllables.

The layout is clean, with clear timestamps and speaker names beside each line of dialogue.

Sample website created from the Brisk podcast feature

Auto Creation of Mind Maps

NotebookLM from Google allows educators to design dynamic mind maps from self-selected content. Educators choose materials, load them to NotebookLM, and then create a mind map. The resulting graphic organizer is expandable to hide or reveal relevant content. Educators can then export the mind map as images which can also be embedded in other forms of media, like videos and slide decks.

A mind map titled “Robots for Everyone Project.” At the center is a main node labeled Robots for Everyone Project, with primary branches radiating outward:

* Core Philosophy & Purpose: every learner can acquire communication skills; achieved through evidence-based practices (EBP); robots provide a vehicle for (sub-branch extends but is not expanded further); robots not used as a reinforcer
* Project History and Evolution: a single sub-node indicator (chevron) showing additional content not expanded
* Goals and Metrics: choose language concepts to target; choose computer science standards; design experience using robots and coding; reflect on other goals (non-language/computer science); reflect on replication/reinforcement at home
* Design & Delivery Process: deliver educational experience; collect performance data; report performance data; reflect/collaborate on data and adjustments; repeat, making necessary adjustments
* Team Roles and Responsibilities: one sub-node indicator (chevron) suggesting more details
* Data and Artifact Requirements: one sub-node indicator (chevron) suggesting more details
* Other Design Considerations: one sub-node indicator (chevron) suggesting more details

The layout is radial, with gray primary nodes branching to green sub-nodes. The overall visual is clean and dark-themed, with curved connectors linking each idea.

A mind map created in NotebookLM about the Robots for Everyone project

Sharing Routines Through Auto-Generated Storybooks

Google Gemini has a feature called Storybook where users can create a customized picture book on a given topic by writing a text prompt. At the time of this blog post, the feature is listed as experimental. Educators can describe elements of the book, such as target audience age, art style, and more! Customized storybooks can be generated specific to a learner’s interest and goals. For instance, reading about a character moving through a series of steps or routines can help students learn that particular skill.

Screenshot of the Gemini interface showing a generated children’s storybook called “Fitz’s Morning Rush.” On the left, a text prompt requests a story about a ferret going through his morning routine at a fourth grade reading level. Gemini responds with the storybook output. On the right, an illustrated page is displayed. The illustration shows a ferret mother in a yellow apron serving a bowl of food to her child, a young ferret named Fitz, who is sitting at a small wooden table wearing a red sweater and blue jeans. They are inside a cozy kitchen with a stone floor, green cabinets, a refrigerator, shelves, and a window with light streaming in. On the facing page, text reads: “His mom had left his favorite breakfast on the table: a bowl of crunchy kibble with a few sweet berries on top. Fitz hopped onto his chair and nibbled happily, making sure not to leave a single crumb. ‘Yum!’ he said to himself.”

A sample page from the storybook titled Fitz’s Morning Rush, which features a ferret character moving through his morning routine. The storybook was created with Google Gemini’s Storybook Gem.

Auto Creation of Video Content

NotebookLM from Google allows educators to design dynamic video experiences from self-selected content. Educators choose materials, load them to NotebookLM, and then create a video overview. The resulting video is similar to a slide deck with a human-like narrator. Here’s a sample video titled Unlocking Learning: AIM in VA, created by the Assistive Technology Specialist about accessible educational materials. Once a video is created, educators can download the file to edit in a video editor (such as WeVideo) or share directly with learners if no edits are necessary.

Screenshot of the NotebookLM interface titled “AIM VA Navigator.” The screen is divided into three panels. On the left is the “Sources” panel listing seven selected documents, including AIM Considerations, Accessibility to Digital Texts and Beyond in LCPS, Digital Rights Manager, Eligibility Requirements, Home, IEP Documentation, and LCPS AIM VA Guidance Document. The center “Chat” panel displays a summary explaining Accessible Instructional Materials in Virginia, outlining eligibility requirements, roles of school personnel, and technologies for providing AIM. At the bottom of this panel are buttons for saving notes, adding notes, generating an audio overview, and creating a mind map. On the right is the “Studio” panel showing a video overview titled “Unlocking Learning AIM in VA” with a still image slide. The slide has a blue box with the title “Accessible Materials AIM” and the text “Print based educational materials converted into specialized formats to meet student needs,” alongside a magnifying glass graphic. A video playback bar at the bottom indicates the video is 5 minutes long with 1 minute and 12 seconds played.

NotebookLM Video Overview in the Studio Panel

Creating Human-like Audio Conversations with NotebookLM

NotebookLM from Google allows educators to design dynamic audio experiences from self-selected content. Educators choose materials, load them to NotebookLM, and then create an audio overview. The resulting audio sounds like two human podcast hosts discussing the content. Here’s a sample of the podcast hosts discussing the accomplishments of the Specialized Instructional Facilitators – Assistive Technology and the Assistive Technology Specialist during the 2024 – 2025 school year and a transcript of the podcast. Once created, educators can download the audio file to share with learners.

Screenshot of Google’s NotebookLM interface showing a project titled “Inclusive Design and Assistive Technology Accomplishments: 2024–2025.” The screen is divided into three main panels. On the left is a “Sources” panel with one document selected. In the center is a “Chat” panel displaying a summary of the document with headings, emojis, and descriptive text. At the bottom of this panel are buttons for saving notes, adding notes, generating an audio overview, and creating a mind map. On the right is a “Studio” panel with tiles labeled Audio Overview, Video Overview, Mind Map, and Reports, along with a section for an interactive audio file. At the bottom right is a playback bar showing an audio recording titled “Unlocking Potential: How LCPS…” with play and note options. The top navigation bar contains controls for Analytics, Share, and Settings.

NotebookLM – Audio Overview is available in the Studio panel on the right.