Our Takeaways from the 41st Annual CSUN Assistive Technology Conference
In March, we returned to Anaheim for the 41st annual CSUN assistive technology conference. As usual, we were overwhelmed in the best way.
We got the chance to meet up with many of our readers, partners, and friends, plus a whole bunch of new faces.
This event is one of our favorites because it is indisputably the largest gathering of people who care deeply about accessibility and assistive technology, whether they are building it, testing it, advocating for it, relying on it, or all of the above. You can sit down for breakfast and strike up a conversation with someone you’ve never met, who is deeply familiar with the work that you’re doing. Just like that, you’ve made a new friend. It’s quite invigorating.
For those who were not able to attend, we hope this post brings you in on some of that energy. For those at the center of the action, we would forgive you if you didn’t manage to catch everything, much less remember and keep it all together by the time you got home. Despite taking a copious amount of notes, we always find ourselves learning from reflection and recap posts because no one can be everywhere at once. We hope you get something from ours as well.
This post was originally written as a section for our accessibility industry update newsletter. As with last year, it rapidly grew out of control. It largely fits into dedicated sections, so feel free to skip around to find the stuff that interests you.
A Few Stats
While numbers have not officially been published, this year anecdotally felt busier than last year, which purportedly drew more than 5,000 people. Multiple attendees described it as the busiest CSUN since before COVID.
There were 358 unique sessions compared to 343 last year, representing a 4.37% increase.
The top five topics (digital accessibility, blind/low vision, Artificial Intelligence, education, and design) accounted for over 70% of sessions.
Additionally, 126 different companies purchased booths in the exhibit hall, most of which had demos and/or free swag. We spent a lot of time here.
The Keynote
The event kicked off with a timely, highly relevant keynote speech given by Haley Moss. “Diagnosed with autism at the age of three, Haley Moss’ parents were told that she might not ever finish high school or earn a driver’s license. Today, she is a neurodiversity expert, educator, lawyer, and the author of several books that guide neurodivergent individuals through professional and personal challenges. Haley is a consultant to top corporations and nonprofits that seek her guidance in creating an inclusive workplace, and she is a sought-after commentator on disability rights and the Americans with Disabilities Act.” The session is worth the watch. What follows is a list of paraphrased quotes that really stuck out to us.
- Even in times of transition, our commitment does not waver. The work must continue because people with disabilities rely on you (and us). Access is not optional. Equity is not seasonal. Inclusion is not contingent upon political climate.
- Life is not a DIY project but an interdependent existence. We rely on each other. Every single one of us needs help from other people. The help that we need may look different depending on what we are struggling with or would rather delegate.
- Moss encourages neurodivergent individuals to reject “Normalcy” and embrace their unique interests (like her passion for Pokémon). These perspectives are important not just because they create joy, but because they inspire innovation. “The price of acceptance should not be conformity.”
- Advocacy doesn’t always have to be a grand gesture; it can be as simple as making a new friend, asking a respectful question, or showing up at a conference to share a story.
AI and Accessible Code
AI coding tools produce inaccessible code by default. This is not a new revelation, but two sessions really drove it home from different angles.
Michael Fairchild from Microsoft presented a11y-llm-eval, a benchmarking tool that evaluates how well LLMs produce accessible code. Without accessibility prompting, GPT 5.2 led at 41% passing, while the rest of the models (including the Gemini and Claude families) scored near zero, bringing the average to roughly 10%. Custom instructions help a lot. Simply adding “All output MUST be accessible” to the prompt gained 18 percentage points, and adding “All output MUST be accessible. Use semantic HTML first; only use ARIA when necessary, and ensure full keyboard support. Conform to WCAG 2.2 Level AA.” gained 37. Expert-level instructions, published in the Awesome Copilot project, were able to push some models above 90%.
His advice fits into a few points:
- Be precise with language (use MUST/SHOULD).
- Use lists.
- Ask the agent to help optimize your instructions.
- Do not put critical resources behind links.
- Do not paste entire WCAG standards into your prompt.
Karl Groves from AFixt ran a vibe coding accessibility experiment. He tested 12 AI coding tools (ChatGPT/Codex, Cursor, Claude Code, GitHub Copilot, Gemini Code Assist, Windsurf, Devin, Replit, Bolt, Lovable, and Amazon Nova). Each got a single prompt to create an online pizza ordering form, once mentioning accessibility and once without. There was a drastic difference in the outputs. It appears that, like the humans who created these models, “If you don’t prompt for it, you won’t get it.”
It’s great that we now have several benchmarks that can measure the degree to which GenAI models generate accessible code, plus instructions and techniques to make the results better. The smaller part of the battle will be getting developers to take the few minutes required to adopt these techniques. The biggest piece is, as it always has been, cultivating the right mindset.
A key lesson here is to bake accessibility into your prompts, your system instructions, and your review process. Then encourage everyone you know to do the same.
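As a trivial illustration of that lesson, the instruction text from the session can be prepended to every coding request with a small helper. This is a sketch only; `build_prompt` is a hypothetical function, and the exact instruction wording that works best for your model is something to iterate on:

```python
# Accessibility instructions paraphrased from the session's examples.
# Precise, imperative language (MUST) reportedly works best.
ACCESSIBILITY_INSTRUCTIONS = (
    "All output MUST be accessible. "
    "Use semantic HTML first; only use ARIA when necessary. "
    "Ensure full keyboard support. "
    "Conform to WCAG 2.2 Level AA."
)


def build_prompt(task: str) -> str:
    """Prepend the accessibility instructions to any coding task."""
    return f"{ACCESSIBILITY_INSTRUCTIONS}\n\n{task}"


print(build_prompt("Create an online pizza ordering form."))
```

The same text works as a system prompt or as a repository-level custom instructions file; the point is that it is written once and applied to every request automatically, rather than relying on each developer to remember it.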
Braille
Multi-line Displays
Multi-line refreshable braille displays are all the rage right now, and for good reason! For decades, braille displays could only show one line at a time, with length capped at anywhere from 12 to 80 characters depending on the hardware. Imagine being limited to a single dimension of screen real estate. It’s not great. Oh, and say goodbye to pictures of any kind. Until now, this was the state of the art.
The problem has been well documented. Braille displays are usually composed of cells, each of which represents a single character. Each cell holds eight dots, and dots are made of tiny, fragile, intricate components that need to be able to raise and lower rapidly. If just one dot is damaged, it can seriously impact comprehension. So, you have 8 dots per cell × the number of cells on the display, all of which need to raise or lower with little to no latency, tens of millions of times over the lifespan of the device. It’s expensive.
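To put that math in concrete terms, here is a back-of-the-envelope actuator count for a hypothetical four-line, 40-cell display (the dimensions are illustrative; real hardware varies):

```python
# Every dot is an independent mechanical actuator that must work
# reliably for the life of the device.
dots_per_cell = 8
cells_per_line = 40
lines = 4

actuators = dots_per_cell * cells_per_line * lines
print(actuators)  # 1280 individual dots on one display
```

A single-line 40-cell display has 320 actuators; going to four lines quadruples that, which is a big part of why multi-line hardware took so long to become affordable.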
Because of some nerdy engineering marvels that would fill a book, this problem has mostly been solved. Multi-line displays show more context at once, and more importantly, render tactile graphics. The most popular ones are currently the Monarch (HumanWare/APH), Orbit Slate (Orbit Research), DotPad (Dot Inc), and Codex (NewHaptics). Basically all of them received an update of some kind.
- The Monarch from APH continues to mature. New apps include a periodic table, Wordstock (kind of like Wordle), Echo Explorers (from the Cyberchase PBS KIDS series), plus a new multiplayer mode for the onboard Chess app (powered by Lichess). A new project, Monarch RISE, is creating a community of people using the Monarch display and sharing stories, strategies, and techniques toward competitive integrated employment (CIE). They recently announced that they are “accepting new applications from college students, individuals who have just started in their career field (2 or fewer years), and individuals who are underemployed.”
- Dot Inc. showed off the Dot Pad X, their multi-line display with tactile graphics. Their software ecosystem includes Dot Canvas for tactile graphics, Dot Vista for AI-generated tactile-ready images, Dot Explore for early braille instruction, and an open SDK that allows developers to fully interface with the product (like the A. T. Guys Braille Apps collection). They also revealed the Nemonic Dot, a pocket-sized braille printer combined with a mobile app for indoor and outdoor labels.
- NewHaptics demonstrated their pneumatic haptics display (Codex) that uses compressed air to raise braille cells. It has four lines of 32 characters, with an integrated touchpad allowing you to route the cursor or perform a wide variety of actions without the need to move away from the display. They have an impressive collection of built-in apps and games that demonstrate the potential of the display (hangman, Battleships, minesweeper, Whac-A-Mole, etc.), but probably the coolest one is their audio editor, which is essentially a braille accessible DAW. It can render a waveform through touch-capacitive navigation. When you want to drop the insertion point at a spot in the file, just double tap on the touch pad at that position.
- The Canute from Bristol Braille Technology and the Cadence tablet with modular, variable-height cells were also present.
The BrailleNote Evolve
HumanWare recently announced the BrailleNote Evolve, a braille notetaker running full Windows 11 Pro on an Intel Core Ultra 5 processor with 32 GB of RAM. This one is huge because it is the first computer designed around braille that provides specs on par with a modern laptop. It ships with NVDA and a free six-month JAWS subscription through a new Vispero partnership, which is another first.
It will be made available in 20, 32, and 40-cell models with a Perkins keyboard, dedicated arrow and modifier keys, and physical cursor routers. A QWERTY version is planned later this year. At $6,600, the question is whether it will hold up to demanding or long-lasting tasks.
Other Displays
- Orbit Research brought along the Strata line with piezoelectric cells (quieter and faster than their Orbit cells) and the Flow family. These offer an ultra-light form factor, no battery, no Bluetooth, entirely powered by USB. You basically just plug it in and let the braille HID protocol (which is widely supported by modern screen readers) handle the rest.
- Vispero released the Focus 640 sixth generation, with redesigned cells and a new four-way D pad alongside the existing thumb keys, rocker bars, and programmable buttons.
- Selvas announced the BrailleSense 7, which has a feature that will automatically cycle to the next line when you get to the end of the current one. It also has a Gemini AI button which just goes to show that they’re really embracing AI here, something we haven’t seen on many braille notetakers up to this point. Interestingly, it will ship one version of Android behind, which could impact compatibility with certain apps.
- Beacon Street showed a two-line, 40-cell prototype at the Aira IT booth. The form factor was super sleek and compact, something you could easily drop into a bag and forget about until you needed it. This didn’t seem ready for prime time (yet) but is one we want to watch.
Kiosks and Public Access
Kiosks aren’t the first thing that comes to mind for most people when they think of digital accessibility. However, the European Accessibility Act (which went into effect in June of 2025) requires that new installations support accessibility features, and some businesses have an indirect obligation to do so in the U.S. as well under Section 508, HHS Section 504, and the Air Carrier Access Act. So, there is definitely a growing market.
- LG Electronics, in partnership with Dot Inc, unveiled an accessible kiosk that comes with an out-of-the-box braille panel, sign language video guidance, a screen reader, and a height adjustment system for wheelchair users. CSUN was the first time users got the chance to interact with it publicly.
- Not to be outdone, Sony showed off their accessible retail kiosk with braille and audio product descriptions, developed in collaboration with the Braille Institute. The kiosk was shaped by community feedback gathered at this very conference, with the first prototype appearing around 2018. It is now in 925 Best Buy stores across the U.S. and growing.
Legal Landscape
Lainey Feingold delivered a digital accessibility legal update that is an extension to the talk given at Axe-Con earlier in the year. You can find the full resource page here. Here is a brief summary:
- The Title II final rule requiring state and local governments to make websites and mobile apps accessible stands, with looming deadlines of April 24, 2026 (for entities with a population of 50,000 or more) and April 26, 2027 (for smaller entities under 50,000 and special districts). There are outstanding government efforts to change it. “The rule is the rule until it isn’t.” Multiple court cases (Texas voting, Louisiana, WVU, NYC blind juror) confirm the ADA applies to government websites regardless of WCAG-specific regulation.
- Healthcare organizations that receive federal financial assistance and have 15 or more employees must meet provisions of a rule requiring them to “make all programs and activities provided through electronic and information technology accessible; to ensure the physical accessibility of newly constructed or altered facilities; and to provide appropriate auxiliary aids and services for individuals with disabilities” by deadlines starting on May 11, 2026. A Kaiser telehealth settlement and a blind employee medical records case show active enforcement.
- Section 504 of the Rehabilitation Act is under attack. Disability organizations and the Center for American Progress have pushed back.
- The Department of Education has seen OCR layoffs and $38 million paid to laid-off staff, while most complaints were dismissed.
- The Department of Justice sued Uber over service dog and wheelchair denials under Title III of the ADA. A settlement was reached for prisoners with hearing loss in North Carolina after complaints were received alleging that NCDAC failed to provide effective alternatives for communication. The Equal Employment Opportunity Commission sued Pearson on behalf of blind employees, after Pearson contracted with inaccessible third-party vendors providing benefits and training.
- A bunch of legislation has been proposed, including a Web and Software Accessibility Act, a Medical Device Nonvisual Accessibility Act, and the CVTA (a CVAA update). On the other side, “backlash” laws have been proposed that would limit the power of web access lawsuits, especially toward small businesses.
- Overlays: “Don’t use them.” The US Federal Trade Commission issued a one million dollar fine against accessiBe. A small business filed a class action lawsuit against accessiBe, and another small business sued UserWay.
- Enforcement of the European Accessibility Act is underway.
Be My Eyes
In an after-hours event, Be My Eyes (the app that connects sighted volunteers with blind users over a video call) announced that they hit 1 million blind users and 10 million volunteers. Put another way, the global community is now nearly the size of Belgium!
They demoed Be My Eyes Workplace, an employer-facing product for supporting blind and low-vision employees.
They also launched the Be My Eyes Foundation, a nonprofit to guarantee the app (and current and upcoming AI features especially) remain free permanently.
Navigation and Orientation
- GoodMaps once again provided indoor navigation for the conference venue through step-by-step instructions, accurate to within a few feet. They do this using LiDAR and the phone’s camera. A blind or low vision user need only hold their phone at chest height, follow the instructions, and avoid smaller or temporary obstacles (and people, obviously). This worked well even in the noisy, crowded Marriott. The folks at GoodMaps have apparently got scanning locations down to a science, as long as a trained technician can make it there with a LiDAR scanner. The team then uses the collected models to build out a two-dimensional floor plan. We are enthusiastic about the potential of this technology and look forward to the day when it is deployed in airports and other facilities that have historically been difficult to access independently. Back at CSUN, we had many conversations with attendees who wished it worked in the exhibit hall, as it could be somewhat challenging to find specific booths even when using apps like Seeing AI to scan for nearby text.
- Audiom provided an accessible map of the conference for virtual exploration. Their approach borrows from audio games where sound, speech output, and keyboard (or mobile touch) navigation let users move through a space with features similar to Google Maps.
- Glidance is a guided mobility device that aims to serve as a more technologically sophisticated alternative to the white cane. There are many schools of thought about the viability of such a solution, with incredibly strong opinions on both sides. Most other attempts to recreate the cane have failed; a quick Google search for “smart white cane” will surface dozens of master’s and PhD research projects that make bold promises but ultimately fall flat in real-world scenarios. However, this one has been in production for years, with extensive feedback from the community, and the team is working hard to ensure that as many viewpoints as possible are captured throughout its development. This year was the first time that waitlist participants were able to take it for a spin outside of the exhibit hall. While we were only able to test it there, we did notice a measurable improvement in obstacle detection relative to last year. It seamlessly navigated past a whole bunch of tables and people, which was pretty cool. The team made a point of talking about plans to integrate it with apps like GoodMaps. This was admittedly the point where we felt like it might actually hold up when faced with more challenging environments. The prospect of being able to select, say, a vending machine as a destination and have it just navigate you there (something GoodMaps on iOS can already do) finally feels within the realm of possibility.
- Meta is rolling out an SDK that will allow developers to integrate their glasses with third-party apps. Aira is bringing its agent (AccessAI) service to the glasses. OOrion has partnered with Meta’s SDK for text recognition and navigation. HapWare showed a wristband that pairs with the glasses for haptic feedback on social cues (waves, facial expressions, high-fives, fist-bumps, you name it).
- Agiga’s EchoVision glasses aim to be an accessibility-first alternative to the Meta Ray-Bans. A common complaint about the Meta Ray-Ban family is that the speakers do not line up well with many hearing devices. The EchoVision glasses also connect over Wi-Fi, so they keep working even when your phone isn’t in your pocket.
- ExploraVist is not built into a specific pair of glasses. Instead, the device takes the unique approach of clipping onto any pair of glasses, and it can even be worn on a lanyard or bracelet! We thought this was an innovative approach and look forward to seeing where it goes.
- Luna Glasses are full-color night vision glasses for night blindness. Pilot run sold out, waitlist open for the next 500-unit run.
Accessible Music
Andrii, a Ukrainian veteran, was wounded on the battlefield about a year ago, losing both of his arms and his sight. The realization that he could no longer play music was heartbreaking. He searched for solutions and discovered MIDI instruments that made it possible again. He now plays a DM48X MIDI controller harmonica, which gives him “full accessibility to any MIDI instrument.” The MIDI Association’s conference report tells his story: “When Andrii said at the end of our interview ‘MIDI is great’, we realized that all the work we do is worthwhile if we can help people like him enjoy making music again.”
- The AmeNote AptiPlay is a MIDI 2.0 controller that converts switch and analog inputs into music. It triggers notes, chords, samples, and loops, and is compatible with Xbox Adaptive Controller inputs. It launches in Q3 2026.
- The UniMIDI Hub by Audio Modeling and Musica Senza Confini is a set of customizable pads playable via eye-tracking, touchscreen, or MIDI devices. It won the 2024 MIDI Association Software Prototype category and was developed with the University of Milan.
JAWS, Vispero, and the Screen Reader Landscape
The Freedom Scientific brand is at long last being retired, sort of. JAWS, ZoomText, and Fusion are moving under the Vispero name. The name “Freedom Scientific” will still be used for hardware like the Focus Braille Display.
The Page Explorer feature in JAWS, which uses AI to give a human-readable overview of web pages and how best to navigate them, is now available to home annual license users.
Vispero also demoed a screen-reading AI agent that will let you give JAWS instructions like “book travel with these dates to this destination”, or “order these items from Amazon”, and it navigates the page for you (potentially working through inaccessible interfaces along the way). They expressed a desire to release by the end of the year.
Sessions
Automated Accessibility Testing for Design Systems using AI (eBay)
eBay ran two sessions on accessibility in their Evo design system.
The first: they define accessibility “contracts” per component. Contracts are things like required keyboard interactions, focus management, ARIA roles, and supported states. The Evo Playbook defines the patterns, and component libraries (Marko and React) enforce them. When expectations weren’t encoded into tests, “implementation drift” (the thing that happens when too many unrecorded ad-hoc changes slowly change the functionality of something over time) crept in across teams.
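A contract like that can be pictured as plain data that automated tests check rendered components against. The sketch below is illustrative only; the field names and the `check_component` helper are invented for this post, not eBay’s actual schema:

```python
# An accessibility "contract" for a button-like component, expressed
# as data. Tests compare each rendered implementation against it, so
# ad-hoc changes that break the contract fail CI instead of drifting.
CONTRACT = {
    "role": "button",
    "keyboard": ["Enter", "Space"],  # keys that must activate it
    "focusable": True,
}


def check_component(rendered: dict, contract: dict) -> list[str]:
    """Return a list of contract violations for a rendered component."""
    violations = []
    if rendered.get("role") != contract["role"]:
        violations.append("wrong role")
    if contract["focusable"] and not rendered.get("focusable"):
        violations.append("not keyboard focusable")
    missing = set(contract["keyboard"]) - set(rendered.get("keyboard", []))
    if missing:
        violations.append(f"missing keys: {sorted(missing)}")
    return violations


# A drifted implementation: a clickable div with no keyboard support.
drifted = {"role": "generic", "focusable": False, "keyboard": []}
print(check_component(drifted, CONTRACT))
```

The value is less in any single check and more in having the expectations written down once, where every team’s implementation (Marko, React, or otherwise) can be measured against the same source of truth.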
The second: using AI to generate tests across 80+ components. AI introduces systematic errors like hallucinated keyboard interactions the component doesn’t support, wrong ARIA assumptions, inconsistent configurations, and floods of low-value test cases. When this happens, the bottleneck shifts from writing tests to triaging them. Tests passed against incorrect elements or validated the wrong behavior, and what good is a passing test if it doesn’t need to be there at all? AI speeds up generation but creates cognitive debt if nobody ensures the tests reflect user interactions.
Extending Web Platform Tests for Accessibility Interoperability (Apple)
The Interop Accessibility initiative is a collaboration between Apple, Mozilla, Google, Microsoft, Igalia, and others to build shared accessibility tests in the Web Platform Tests framework.
The problem is that browsers construct their accessibility trees differently, so the same correctly implemented component can behave differently across screen reader and browser combinations. Over 1,000 tests have been written. They run in CI for browser engines, catching accessibility regressions during development. The FCC cited this work in its 2024 CVAA Biennial Report to Congress, and new tooling is expected to expand coverage by orders of magnitude in 2026.
If you’ve ever had to tell a user to “try a different browser” because a component works in Chrome but not Safari, or an accessibility defect is recorded on Chrome but not Firefox, this is the project working to fix that at the engine level.
Reimagining Accessible Graphs in a Legacy System (Khan Academy)
Khan Academy rebuilt their graphing experience from scratch. The old system used Raphael.js, an SVG library built around mouse dragging which is, perhaps unsurprisingly, unusable without a mouse. Many students couldn’t complete graphing exercises independently.
They moved to Mafs, an open-source React library for interactive math out of the belief that it was probably best to build accessibility into the core interaction model, instead of layering it in on top. Spoiler alert: it paid off.
The hard part was communicating information about the graph, including dynamic updates, to screen readers: coordinate changes as users move points, function types (linear, quadratic, sinusoidal), intercepts, shapes, etc. update in real time. Movable graph elements don’t map to standard HTML semantics, so they had to build custom approaches to meaningfully represent them. The result has full keyboard parity with mouse interaction.
Social Feedback Loop: From Insights to Opportunities (Amazon)
Accessibility feedback from Amazon customers was scattered across social media, app reviews, and other channels. Aggregating it was a manual process that happened quarterly at best.
Amazon built an AI pipeline that ingests this feedback, identifies accessibility issues, analyzes the sentiment, classifies patterns, and scores them by impact. These issues are then automatically forwarded along to the team best positioned to act on them.
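The routing step of such a pipeline can be sketched in a few lines. This is a toy illustration of the idea, not Amazon’s implementation; the keywords, topics, and team names are all invented, and a real system would use an LLM or classifier rather than keyword matching:

```python
# Toy feedback router: map accessibility topics to the team best
# positioned to act on them. Unmatched reports fall to human triage.
ROUTES = {
    "screen reader": "assistive-tech-team",
    "caption": "media-team",
    "contrast": "design-systems-team",
}


def route_feedback(report: str) -> str:
    """Return the team a piece of feedback should be forwarded to."""
    text = report.lower()
    for keyword, team in ROUTES.items():
        if keyword in text:
            return team
    return "triage-queue"  # no match: a human takes a look


print(route_feedback("The checkout button is unlabeled for my screen reader"))
```

Even in this trivial form, the shape of the win is visible: feedback that used to sit in quarterly spreadsheets gets a destination the moment it arrives.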
Most AI-and-accessibility talk at the conference was about code generation and assistive features. Using AI to process what users with disabilities are reporting, and to route it to the right teams at a scale manual review can’t touch, is a genuinely neat idea when done right.
Reaching Faculty by Empowering Academic Staff (CSUN)
CSUN’s approach to scaling accessibility in higher education is unique. It targets academic support staff (as opposed to faculty) as the primary leverage point. These staff members manage timelines, course materials, and communications, and their existing workflows put them in a strong position to influence accessibility. Requirements are aligned with key academic milestones such as textbook selection, syllabus deadlines, video captioning lead times, and course readiness before the first day of class. This removes the need for separate processes and makes accessibility part of standard operations. Responsibilities are distributed across faculty, disability services, captioning teams, libraries, and support groups, reducing ambiguity and helping staff coordinate efforts more effectively. Finally, the approach relies upon the tools that are already in use (e.g. accessibility checkers, course reports, captioning services).
Closing Gaps & Improving Access to Digital Materials for Students (NCADEMI)
NCADEMI (pronounced “N-cademy”) is a technical assistance center at Utah State University, funded by the U.S. Department of Education’s Office of Special Education Programs. They work with state and local educational agencies to get accessibility into practice with minimal friction.
What really stood out to us was how structured and actionable their model is for implementing and sustaining accessibility at scale.
It has seven “Quality Indicators” covering areas such as leadership, procurement, educator-created content, training, accessibility data, and sustainability, each of which is broken into concrete, assessable components. They push for cross-functional steering committees, accessibility baked into procurement (evaluated during vendor selection, not after purchase), and centralized inventories of accessibility status across edtech tools. NCADEMI runs year-long engagements with monthly working sessions, and their website is a wealth of practical information.
Resources and Further Reading
- Great big list of CSUN 2026 presentation links
- CSUN 2026 conference materials in accessible formats (HTML, DAISY, EPUB)
- Michael Fairchild: Embedding Accessibility into AI-Based Software Development
- Karl Groves: Vibe Coding Accessibility Experiment (GitHub)
- Microsoft a11y-llm-eval (GitHub)
- Awesome Copilot Accessibility Instructions
- Lainey Feingold: Digital Accessibility Legal Update Links
- Be My Eyes: Innovation and Community at CSUN 2026
- MIDI Association: CSUN 2026 Conference Report
- NFB Access On #67: Recapping CSUN 2026
- Vispero: March 2026 JAWS/ZoomText/Fusion Update
- HumanWare + Vispero Partnership Announcement
- Sony Electronics at CSUN 2026
What Are We Missing?
Reach out and let us know!