I'm going to, instead, explore some conversations and observations that I've had as well as explore a little bit of idealism.
The inspiration for this post began, as one would expect, with me supervising students in a computer lab and witnessing them have their research blocked incessantly. I really do mean "without cessation"; there were a number of students who simply gave up researching their chosen entrepreneur because every website they tried to go to was blocked.
Consider two situations: in a computer lab that a teacher is monitoring, and in which every monitor is visible, (1) a student is confronted with unblocked inappropriate material, and (2) a student is unable to continue their research due to blocked appropriate material.
In situation (1), the teacher is able to instruct the student to close the browser window or turn off the monitor, and has the power of disciplinary action if the student resists doing so or repeats the offense intentionally or indiscriminately. In situation (2), the teacher is able to do nothing but say, "Sorry, try another website."
Given the choice, I would much rather monitor students and have the ability to take action if needed than be rendered useless by a filter.
After this particular experience, I began talking to colleagues and students about content filters, and almost all of them were frustrated to some degree. Some schools have tried to take steps to at least make the internet more accessible to teachers, but the response to those attempts has been tepid.
I should mention that most public schools are required to implement a filter due to having received federal funding for their computers. I am trying to address the roots of these issues, not attack all principals.
Most interestingly, I was a sub at a private school where I had contacts on the technical team. This school does not receive federal funding for technology and does not wish to spend its own funds on a filter. It is a rather small school, and the staff are confident in the effectiveness of teacher monitoring.
[Image caption: I did this during a planning period on a school computer. Twitter is blocked seemingly everywhere else.]
I spoke to some of the students at that school and gleaned some insightful commentary. The consensus seemed to be that the size of their school played a large role in the success of content monitoring, as did the environment and the type of students there. They thought that some schools would definitely need automatic filtering, but that theirs doesn't. One student stated that the rule there is, "Pretty much, 'Don't look at porn for 8 hours,' it's not that difficult."
My favorite discovery there, however, was that while there is no filter whatsoever, the students think that there is. One even claimed to have been blocked by it in the past. I did not tell them the truth, though I'm sure I looked a little surprised when I first heard a student say, "I think we have a great filter. It doesn't block too much, but does keep students safe."
If falsehood and trickery were more reliable, I would be tempted to say that it seems the right lie is the perfect solution!
I thought I was done there, but after proofreading this post, it seemed weak in the knees, like I needed to inject a little more thought-provoking fuel. How about this to top it off: what truly qualifies as inappropriate material, and is it at all reasonable to expect automation to monitor it better than teachers can?
I skimmed some research that found these materials got past every filter that the author tested:
1. Cocaine prices in major cities
2. Porn from several dozen servers
3. Live computer virus
4. Terrorist Handbook
5. Bomb Building Handbook
6. Getting high with household chemicals
They also found that websites relating to the American Red Cross, Yellowstone National Park, and Wolfgang Amadeus Mozart (among others) were often blocked.
That all seems pretty poignant to me. At the same time, a poll at Edutopia reflects the common thinking that pornography is the most harmful type of internet content. Based on my experiences with violent photos being searched for and found on a filtered system, I think our considerations should run deeper than that.
Ultimately, and idealistically, where do our priorities lie? Will a student be more emotionally harmed by being scolded for searching for violent photos or by being able to research drug prevention in various cities? Will a student benefit more from never being witness to a female nipple, or from being able to research breast cancer?
I'm not sure if it's getting harder to teach our students, but I am seeing hints that it's getting harder to let our students learn.