"It ain't what you do, it's the way that you do it!"
Do you mean that the way an e-reader device is used in the environment will be a more important consideration for comfort (e.g., eye strain) than which e-reader device is used?
@Steve - not so much, although that is probably something to factor into any study. I mean that it is not important that it is an e-reader, but it is important how it is designed.
There are way more current and possible designs for e-readers than for p-readers (notwithstanding - or perhaps because of - thousands of years of evolution) and, if any of the design factors make a difference (and I am sure most do), it is always possible to design them differently. It's significantly more complex than that, too, because of the rich interplay between design elements.
It's exactly the same issue as comparing e-learning and p-learning and expecting to find some universal qualitative difference in learning. It's a bit like saying all paintings are better than all drawings, or all blues music is better than all classical music. Makes no sense.

We can probably fairly reliably find out how one particular design configuration compares with one other design configuration, and we can probably find out that some things (e.g. shining 100 lumens of light directly into someone's eyes) are (almost) always a bad idea for at least some kinds of activity, and we might even be able to discern some generalizable patterns that have held in the past but, unless they have held 100% of the time across all contexts, there is no reason to suppose that, given our capacity to alter the design, they will hold in the future. Of course, if we do find something is a universal problem, then the next step is to look for a solution. But it is no more sensible to investigate whether learning (or reading, or art) is better (or worse) with or without electronic media than it is to investigate whether it is better (or worse) with or without glue.
Given your point that there is a difference between direct and indirect light reaching the eye, would it be useful to research how humans interact with visual displays? For example, a cinema screen showing reflected light presents a different interface to the eyes than a digital display of the same size emitting light directly (your flashlight note). The majority of humans (with a billion-plus smartphones in use) have adapted to digital displays. Has this adaptation to unnatural, direct light produced no biophysical change, or has our exposure to such technologies been too short, on the timescale of our species (or a human lifetime), to investigate?
Interesting, but not so easy. And a few links away from there is the old news. I'd say that if Bruce Schneier isn't talking about it, we don't need to worry.
Interesting. This is a related free webinar:
Register for Jan. 25 ACM-SIGAI Panel on Ethics in AI with Joanna Bryson, ACM Fellows Michael Wooldridge and Stuart Russell
If you haven't done so yet, register for the next free ACM Learning Webinar, "Panel and Town Hall: Big Thoughts and Big Questions about Ethics in Artificial Intelligence," presented on Wednesday, January 25 at 12 pm ET. The panelists include Joanna Bryson, Associate Professor at the University of Bath; Stuart Russell, Professor at UC Berkeley and Adjunct Professor at UC San Francisco; and Michael Wooldridge, Professor at the University of Oxford. Moderating the discussion will be Nicholas Mattei, Research Staff Member at the IBM TJ Watson Research Laboratory, and Rosemary Paradis, Principal Research Engineer at Leidos Health and Life Sciences. (If you'd like to attend but can't make it to the virtual event, register now to receive a recording of the webinar when it becomes available.) Note: you can stream this and all ACM Learning Webinars on your mobile device, including smartphones and tablets.

There has been a torrent of news, announcements, and discussions in the last year about the ethics of artificial intelligence (AI) and the impact AI can and may have on society. Thinkers and groups from all corners have entered the discussion: from multiple statements by the White House about artificial intelligence and the future of work and the economy, to new academic and research centers for ethics in artificial intelligence at Oxford and the Allen Institute, to large corporations forming the Partnership on AI. We sit down with the panelists to discuss what's hot, what they see on the horizon, and to answer your questions.

Interested students should also consider submitting their thoughts to the ACM SIGAI Student Essay Contest on the Responsible Use of AI Technologies, where they can win cash prizes and chats with leading AI researchers. More details are available at sigai.acm.org/aimatters/blog/tag/contest/.
To submit questions before the day of the panel, please visit goo.gl/forms/iDPXdV0p9fHnFOhx1.

Duration: 60 minutes (including audience Q&A)

Panelists:

Stuart Russell, Professor at UC Berkeley. Stuart received his B.A. with first-class honours in physics from Oxford University in 1982 and his Ph.D. in computer science from Stanford in 1986. He then joined the faculty of the University of California at Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences and holder of the Smith-Zadeh Chair in Engineering. He is also an Adjunct Professor of Neurological Surgery at UC San Francisco and Vice-Chair of the World Economic Forum's Council on AI and Robotics. He is a recipient of the Presidential Young Investigator Award of the National Science Foundation, the IJCAI Computers and Thought Award, the World Technology Award (Policy category), the Mitchell Prize of the American Statistical Association and the International Society for Bayesian Analysis, and Outstanding Educator Awards from both ACM and AAAI. In 1998, he gave the Forsythe Memorial Lectures at Stanford University, and from 2012 to 2014 he held the Chaire Blaise Pascal in Paris. He is a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His research covers a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, global seismic monitoring, and philosophical foundations. His books include The Use of Knowledge in Analogy and Induction, Do the Right Thing: Studies in Limited Rationality (with Eric Wefald), and Artificial Intelligence: A Modern Approach (with Peter Norvig).
Michael Wooldridge, Professor at the University of Oxford; ACM, AAAI, EURAI, AISB, and BCS Fellow. Michael is the Head of Department and Professor of Computer Science in the Department of Computer Science at the University of Oxford, and a Senior Research Fellow at Hertford College. He joined Oxford in 2012; before this he was for 12 years a Professor of Computer Science at the University of Liverpool. Michael's main research interests are in the use of formal techniques of one kind or another for reasoning about multiagent systems. He is particularly interested in the computational aspects of rational action in systems composed of multiple self-interested computational systems. His current research is at the intersection of logic, computational complexity, and game theory. He has published more than 300 articles on the theory and practice of autonomous agents and multiagent systems. He is an ACM Fellow, an AAAI Fellow, a EURAI Fellow, an AISB Fellow, a BCS Fellow, and a member of Academia Europaea. In 2006, he was the recipient of the ACM Autonomous Agents Research Award. In 1997, he founded AgentLink, the EC-funded European Network of Excellence in the area of agent-based computing. He is the President of the International Joint Conference on Artificial Intelligence (IJCAI); was the co-editor-in-chief of the journal Autonomous Agents and Multi-Agent Systems; an associate editor of the Journal of Artificial Intelligence Research (JAIR); an associate editor of the Artificial Intelligence journal; and served on the editorial boards of the Journal of Applied Logic, Journal of Logic and Computation, Journal of Applied Artificial Intelligence, and Computational Intelligence.

Moderators:

Rosemary Paradis is a Principal Research Engineer for Leidos Health and Life Sciences out of Gaithersburg, MD.
Her current work as a data scientist for Big Data analytics includes building models in computational linguistics and natural language processing, machine learning, and artificial intelligence. She has an M.S. in Computer Science from Union College and a Ph.D. in Computational Intelligence from Binghamton University. Dr. Paradis has a number of patents and publications in the areas of recognition algorithms, artificial intelligence, and machine learning. Her previous work at Lockheed Martin included the design and development of machine learning algorithms and managing the Core Recognition and Identification technology development for the USPS, the Royal Mail, and Sweden Post. Dr. Paradis has held positions at General Electric and IBM, and was also a professor at Hartwick College, Ithaca College, and Rochester Institute of Technology. She is currently the Secretary/Treasurer for the ACM Special Interest Group on Artificial Intelligence (SIGAI).
Jon,
Thanks for sharing this article. It was great to see such a succinct discussion of the topic and research.
I am in the midst of creating a syllabus for a course I am teaching next semester on Educational Psychology. I am going to add this reading to the course.
By the way, the course is going to be self-directed and take place in Pace Commons (my Elgg site).
Best,
Gerald
Jon,
This is a beautiful tribute to your mentor. Thanks so much for sharing it.
Your writing put me in mind of my mentor at Pace, also named Sandra. It is amazing to see how we are shaped and influenced by the people in our networks, and how their sense of things gets transferred and internalized by us, the mentees.
It's time for a change, and not only to grades but to the choice of what is learned. There is lots to learn from homeschooled kids and their parents.
The Landing is a social site for Athabasca University staff, students and invited guests. It is a space where they can share, communicate and connect with anyone or everyone.
Posts made here are the responsibility of their owners and may not reflect the views of Athabasca University.