With the rise of machines to human-level performance in complex recognition tasks, a growing amount of work is directed toward comparing information processing in humans and machines. These studies offer an exciting opportunity to learn about one system by studying the other. Here, we propose ideas on how to design, conduct, and interpret experiments such that they adequately support the investigation of mechanisms when comparing human and machine perception. We demonstrate and apply these ideas through three case studies. The first case study shows how human bias can affect the interpretation of results and how several analytic tools can help to overcome this human reference point. In the second case study, we highlight the difference between necessary and sufficient mechanisms in visual reasoning tasks and show that, contrary to previous suggestions, feedback mechanisms might not be necessary for the tasks in question. The third case study highlights the importance of aligning experimental conditions: we find that a previously observed difference in object recognition does not hold when the experiment is adapted to make conditions more equitable between humans and machines. By presenting a checklist for comparative studies of visual reasoning in humans and machines, we hope to highlight how to overcome potential pitfalls in design and inference.
Comparing human and machine visual perception can be challenging. In this work, we presented a checklist on how to perform such comparison studies in a meaningful and robust way. For one, isolating a single mechanism requires minimizing or excluding the effects of other differences between the biological and the artificial system and aligning the experimental conditions for both. We further have to differentiate between necessary and sufficient mechanisms and to delineate the tasks in which they are actually deployed. Finally, an overarching challenge in comparison studies between humans and machines is our strong internal human interpretation bias.
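To make the necessary-versus-sufficient distinction concrete, the sketch below illustrates one way such a question can be probed in code: if a purely feedforward network, with no recurrent or top-down connections, solves a same-different task, then feedback cannot be a necessary mechanism for that task (even if it remains sufficient). This is only a minimal illustration; the toy stimulus generator, the small CNN, and the training settings are stand-ins chosen for brevity, not the architectures or tasks used in the case studies.

```python
import torch
import torch.nn as nn

def make_same_different_batch(n=64, size=32):
    """Toy stimuli: two 8x8 binary patches on a blank canvas.
    Label 1 if the two patches are identical, 0 otherwise."""
    imgs = torch.zeros(n, 1, size, size)
    labels = torch.randint(0, 2, (n,))
    for i in range(n):
        a = (torch.rand(8, 8) > 0.5).float()
        b = a.clone() if labels[i] == 1 else (torch.rand(8, 8) > 0.5).float()
        imgs[i, 0, 4:12, 4:12] = a
        imgs[i, 0, 20:28, 20:28] = b
    return imgs, labels

class FeedforwardNet(nn.Module):
    """Purely feedforward: no recurrence, no top-down connections."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, x):
        return self.net(x)

model = FeedforwardNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Short training run on freshly sampled toy batches.
for step in range(500):
    x, y = make_same_different_batch()
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Evaluate on a held-out batch.
with torch.no_grad():
    x, y = make_same_different_batch(256)
    accuracy = (model(x).argmax(dim=1) == y).float().mean().item()
print(f"Feedforward accuracy on the toy same-different task: {accuracy:.2f}")
```

High accuracy in such a test would license only the weaker claim that feedback is not necessary for this particular task; it says nothing about whether human observers actually rely on feedback when solving it.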
Using three case studies, we illustrated the application of the checklist. The first case study, on closed contour detection, showed that human bias can impede the objective interpretation of results and that investigating which mechanisms could or could not be at work may require several analytic tools. The second case study highlighted the difficulty of drawing robust conclusions about mechanisms from experiments: while previous studies suggested that feedback mechanisms might be important for visual reasoning tasks, our experiments showed that they are not necessarily required. The third case study clarified that aligning experimental conditions for both systems is essential. Once the experimental settings were adapted, we found that, in contrast to the differences reported in a previous study, DNNs and humans show similar behavior on an object recognition task.
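To illustrate what aligning conditions can mean in practice, the following sketch restricts a pretrained ImageNet classifier to the same response alternatives a human observer would be given, by pooling its fine-grained class probabilities into a few coarse categories. The specific model (a torchvision ResNet-50), the preprocessing, and the abbreviated class-index mapping are assumptions for illustration only and do not reproduce the exact setup of the third case study.

```python
import torch
import torchvision
from torchvision import transforms
from PIL import Image

# Illustrative coarse categories; each list is a small, hand-picked subset of
# ImageNet class indices falling under that label (an assumption, not the
# mapping used in the original experiments).
COARSE_CLASSES = {
    "cat": [281, 282, 283, 284, 285],
    "dog": [207, 208, 235, 236, 245],
    "car": [436, 511, 817],
}

weights = torchvision.models.ResNet50_Weights.IMAGENET1K_V1
model = torchvision.models.resnet50(weights=weights).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def coarse_decision(image_path: str) -> str:
    """Force the model to answer with one of the human response options."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1).squeeze(0)
    # Pool probability mass over the fine-grained members of each category.
    scores = {name: probs[idx].sum().item() for name, idx in COARSE_CLASSES.items()}
    return max(scores, key=scores.get)

# Example: coarse_decision("stimulus.png") -> "cat", "dog", or "car"
```

Forcing both systems to choose from the same, equally coarse set of response options removes one trivial source of apparent disagreement before any conclusions about underlying mechanisms are drawn.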
Our checklist complements other recent proposals about how to compare visual inference strategies between humans and machines (Buckner, 2019; Chollet, 2019; Ma & Peters, 2020; Geirhos et al., 2020) and helps to create more nuanced and robust insights into both systems.