Agora

(Part I. Part II in progress…)

Agora won several Goya Awards for production (set, costume, cinematography, special effects, etc.) and for its screenplay. The film has received mixed reviews in the United States. Roger Ebert wrote in the Sun-Times,

This is a movie about ideas, a drama based on the ancient war between science and superstition. At its center is a woman who in the fourth century A.D. was a scientist, mathematician, philosopher, astronomer and teacher, respected in Egypt, although women were not expected to be any of those things.

while V.A. Musetto complained in the NY Post,

The story revolves around Hypatia (Rachel Weisz), a real-life philosopher whose proclamations about the sun being the center of the universe ran counter to religious beliefs of the time.

There are a few exciting battle sequences and the sets are lavish, but mostly the film meanders aimlessly for more than two hours.

Then Patrick Goldstein interviewed director Alejandro Amenábar for the Los Angeles Times during Cannes,

At several points during the film, he takes us swooping up and away from Alexandria, allowing the audience to see the world from high above, as if watching from the cockpit of a satellite orbiting the Earth. I asked him why he chose that perspective. “For me, this wasn’t just the story of a woman, but the story of a city — and a civilization, and a planet — so I wanted to find a way visually to capture that. When you see things from a distance, you can see how relative things are. The ideas that so inflame people up close, that feel so scary and menacing, they look very different when you see them from a different perspective.”

The words “center,” “revolve,” “meander aimlessly,” “perspective,” and “relative,” along with other astronomical vocabulary, pepper the reviews as puns – denigrating or admiring, depending on the critic’s general assumptions about film aesthetics. I find it amusing that the reviewers – much like the besieged students in my favorite scene – come close to discovering a correct interpretation of the film’s message, but ultimately fall back on preconceived, fallacious notions.



Definition Machine

Procrastinating again, I have been reading “The Mind as the Software of the Brain” by Ned Block (available here). Here’s a fun passage:

Defining a word is something we can do in our armchair, by consulting our linguistic intuitions about hypothetical cases, or, bypassing this process, by simply stipulating a meaning for a word. Defining (or explicating) the thing is an activity that involves empirical investigation into the nature of something in the world.

One can’t really argue with that. Block uses the example to point out that the Turing test examines intelligence in the first sense (whether an observer would call a machine intelligent) and not in the second sense (whether an investigator would find a machine to be intelligent); and he sets up the rest of his argument nicely with this simple distinction.

I bring up the example – and not the argument – because I want to take it on a tangent. Let’s say we build a machine that actually is intelligent. Would it define an item “j” intuitively or through an investigation?

The definition machine which defines “j” intuitively might match “j” against a list of known items to find the corresponding definition; where there is no match, the machine adds “j” to the list and provides a new definition (perhaps in terms of the other definitions). The definition machine which defines “j” following an investigation of the item might describe its concrete properties, its uses, and its material composition; the machine decides which features rank as most prominent and pegs the definition of “j” to those features.
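To make the contrast concrete, here is a minimal sketch in Python. Everything in it – the class names, the toy lexicon, the three “features” the investigator checks – is a hypothetical illustration of the two strategies as I’ve described them, not a reference to any real system.

```python
# Toy sketch of two "definition machines" (all names are hypothetical).

class IntuitiveDefiner:
    """Defines an item by matching it against a list of known items."""

    def __init__(self, lexicon=None):
        self.lexicon = dict(lexicon or {})

    def define(self, name):
        # Match "name" against known items; where there is no match,
        # add it to the list with a new definition phrased in terms of
        # the definitions already stored.
        if name not in self.lexicon:
            known = ", ".join(self.lexicon) or "anything yet listed"
            self.lexicon[name] = "an item unlike " + known
        return self.lexicon[name]


class InvestigativeDefiner:
    """Defines an item by inspecting its properties, uses, and composition."""

    def define(self, item):
        # Collect whatever concrete features the item exposes, keep the
        # ones that are actually present (our stand-in for "most prominent"),
        # and peg the definition to those features.
        features = {
            "made of": getattr(item, "material", None),
            "used for": getattr(item, "use", None),
            "shaped like": getattr(item, "shape", None),
        }
        prominent = [f"{label} {value}" for label, value in features.items() if value]
        return "; ".join(prominent) if prominent else "an item with no observable features"
```

The design difference is the point: the first machine never looks at the item, only at its own list, while the second never looks at its list, only at the item.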

Interestingly, it’s hard to think of these as two separate machines – in part because our own minds utilize both processes – but examples of each exist. A search engine such as Google might represent an intuitive definition machine (Google simply finds things and puts them in a list, ready for near-instant reordering), and the Mars rover might represent an investigative definition machine (the rovers plot their own course across the Martian landscape while analyzing soil, sending results back to Earth). Now, imagine you put Google search algorithms on the Mars rovers: you might end up with an artificially intelligent definition machine. It won’t be able to do much besides make known certain facts about its environment, but it will do this extremely well.
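Continuing the sketch above, the combination might amount to the investigative machine supplying descriptions that the intuitive machine then stores and serves from its list – again, purely illustrative, with a made-up SoilSample standing in for whatever a rover might examine.

```python
# Hypothetical combination: investigate once, then answer from the stored list.

class SoilSample:                     # stand-in for something a rover might examine
    material = "basaltic regolith"
    use = None
    shape = "fine-grained"

investigator = InvestigativeDefiner()
lookup = IntuitiveDefiner()

lookup.lexicon["martian soil"] = investigator.define(SoilSample())

print(lookup.define("martian soil"))     # answered from the stored list
print(lookup.define("unfamiliar rock"))  # no match, so a new definition is stipulated
```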

This is a fun, slightly sci-fi topic that I think I’d like to return to.

“Why We Cooperate”

An excerpt from Why We Cooperate (MIT Press, 2009) by Michael Tomasello:

First, humans actively teach one another things, and they do not reserve their lessons for kin. Teaching is a form of altruism, founded on a motive to help, in which individuals donate information to others for their use…

Second, humans also have a tendency to imitate others in the group simply in order to be like them, that is, to conform (perhaps as an indicator of group identity). Moreover, they sometimes even invoke cooperatively agreed-upon social norms of conformity on others in the group, and their appeals to conformity are backed by various potential punishments or sanctions for those who resist. To our knowledge, no other primates collectively create and enforce group norms of conformity. Both teaching and norms of conformity contribute to cumulative culture by conserving innovations in the group until some further innovation comes along.

I’ve always been an avid reader of Tomasello, and his latest book shines among the best works of comparative anthropology. But what I’ve loved most about Tomasello are his forays into social philosophy, as seen in the quotation above.

Here’s just one way to think about Tomasello’s framework for Why We Cooperate: how does his enumeration of two separate processes for cultural production alter your thinking about education? I think there’s a tendency amongst educators – especially English teachers – to believe in certain kinds of teaching as a means to certain brands of conformity. For instance, I often hear opponents of standardized testing say, “if we create the expectation for a set of ‘correct’ answers to literary questions, then we will end up with hopelessly automated cultural awareness.”

But that argument relies on a continuity between teaching and imitating, something refuted, at least in part, by Tomasello’s research. We end up with a conflation of processes, when in fact we might understand cultural production better if we separate, in our minds, learning from conforming. We might achieve this separation by stipulating that (1) learning involves only information transfers, and (2) conforming includes behavioral imitations alongside a variety of implicit and explicit beliefs or judgements concerning the imitated behaviors.