Hey, I study special needs education for blindness/visual impairment (and for hearing impairment and deafblindness).
I was wondering if people with low vision experience differences when reading different printed scripts (Japanese, Arabic, the Latin alphabet, Cyrillic, etc.), and whether certain scripts are easier to read than others?
Does anyone know, or know how I could find this out? I discussed it with my prof and he didn't know either.
(If this post seems familiar it might be because I posted it (but worded differently) on reddit too)
The system itself is pretty simple and works like hiragana/katakana. For kanji there are 6-dot and 8-dot versions, which makes it a bit more complicated.
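As a rough illustration of why the extra dots matter (this is just my own sketch using Unicode's generic braille block, not the actual kantenji tables): a 6-dot cell only has 2^6 = 64 possible patterns, while an 8-dot cell has 2^8 = 256.

```typescript
// Minimal sketch (my own illustration, not the real kantenji encoding):
// Unicode's Braille Patterns block encodes each cell as U+2800 plus one bit
// per raised dot (dot 1 = bit 0 ... dot 8 = bit 7).

/** Convert a list of raised dot numbers (1-8) into a Unicode braille character. */
function brailleCell(dots: number[]): string {
  const offset = dots.reduce((bits, dot) => bits | (1 << (dot - 1)), 0);
  return String.fromCodePoint(0x2800 + offset);
}

// A 6-dot cell allows 2**6 = 64 patterns, an 8-dot cell allows 2**8 = 256,
// which is why the extra row of dots helps once you need to cover kanji.
console.log(brailleCell([1]));          // "⠁" (dot 1 only)
console.log(brailleCell([1, 2, 3, 7])); // a pattern that needs the 8-dot cell
console.log(2 ** 6, 2 ** 8);            // 64 256
```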
Something I wanted to ask somebody who's familiar with interfaces for visually impaired users: on Lemmy, does an inline picture with a caption expose what would be the link text as alt text? For example, can a screen reader read this:
I haven't personally tested this, so this is all an assumption based on how related things work. But I've done extensive accessibility testing over the years, and things tend to behave the same way.
On Lemmy's web app, the alternative text should be announced instead of the image (well, alongside it, since the screen reader also lets you know there's an image with alt text). When you use an image as the text of a link (so, what is normally a clickable image), it should announce that it's a link, then that the link contains an image, then the image's alt text. This is not dissimilar to how things work when the image won't load and the alt text is shown instead.
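For anyone curious about the underlying structure: I don't have Lemmy's rendering code in front of me, but a "clickable image with alt text" generally boils down to a DOM shape like the sketch below (the markdown, URLs, and alt text here are made up), and that shape is what the screen reader walks when it announces link, then image, then the alt text.

```typescript
// Hypothetical sketch of what markdown for a clickable image roughly
// turns into in the DOM. The markdown and URLs are made-up examples.
//
// Markdown (a link whose "text" is an image with alt text):
//   [![A cat sleeping on a keyboard](https://example.com/cat.jpg)](https://example.com/post)

function buildLinkedImage(): HTMLAnchorElement {
  const link = document.createElement("a");
  link.href = "https://example.com/post";

  const img = document.createElement("img");
  img.src = "https://example.com/cat.jpg";
  // The alt attribute is the accessible name of the image; because the image
  // is the link's only content, it also becomes the accessible name of the link.
  img.alt = "A cat sleeping on a keyboard";

  link.appendChild(img);
  return link;
}

// A screen reader walking this announces something like:
// "link, image, A cat sleeping on a keyboard" (exact wording varies by reader).
document.body.appendChild(buildLinkedImage());
```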
On mobile apps it's the wild west, depending almost entirely on whether the app developers put in the additional effort to make all of that information available to screen readers. By default, not much is exposed other than the text itself, so images are often skipped completely or announced without their alt text.
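As a rough illustration of what that "additional effort" looks like in a native app (this is a hypothetical React Native sketch, not code from any actual Lemmy client), the developer has to attach the alt text to the view as an accessibility label themselves:

```typescript
// Hypothetical React Native sketch: the alt text only reaches the screen
// reader if the developer explicitly wires it up as an accessibility label.
import React from "react";
import { Image, Pressable } from "react-native";

// altText would come from the post's markdown; the URL and sizes are made up.
export function PostImage({ url, altText }: { url: string; altText?: string }) {
  return (
    <Pressable
      accessible={true}
      // Without an accessibilityLabel the screen reader has nothing to read,
      // so the image tends to be skipped or announced as just "image".
      accessibilityLabel={altText ?? "Image"}
      accessibilityRole="imagebutton"
      onPress={() => { /* open the full-size image */ }}
    >
      <Image source={{ uri: url }} style={{ width: 300, height: 200 }} />
    </Pressable>
  );
}
```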