A follow-up on last week’s discussion about software art –
Came across this example while reading for the mid-term paper about film, algorithmic culture and time. Basically, Gregg Biermann takes clips from Hollywood classics and processes them through mathematical formulas designed to create optical patterns. I thought this would make for an interesting negotiation of what we talked about: the original filmmaker’s intentions vs. Biermann’s interpretation/intentions.
The original drive away scene from Psycho (Alfred Hitchcock, 1960):
Gregg Biermann’s Spherical Coordinates (based on Psycho):
About Superintelligence (Week 8)
- What seems likely to you in the coming years for superintelligence? Compare your view to Bostrom’s scenarios.
- How would you characterize the portrayals of AI in readings 1, 2 and 3 – and how might they “matter”?
Bostrom adopts a cautious, critical view of superintelligence, suggesting that our fate would depend on the actions of a machine superintelligence and that it would be problematic to control what the superintelligence would do. What was surprising to me was how intrusive and available AI has already become to us. We often think of AI as taking the form of a robot and behaving like a human; Bostrom counters this by pointing to examples we already use in our daily lives (e.g. Google search, Siri). Bostrom proposes three types of superintelligence: speed superintelligence, collective superintelligence and quality superintelligence. While the increasing pervasiveness of AI seems probable and possible within the next 20 years, superintelligence still seems out of reach. At this moment, as Bostrom suggests, existing AI has yet to match our abilities to understand language or recognise objects.
AI seems to be portrayed as a desirable yet potentially damaging entity. These ‘problems’ seem to echo issues we already face today, enabled by the technologies we already have – e.g. phishing, identity theft/impersonation. The challenge, then, is to imagine what other kinds of problems could arise with AI beyond those we already face with present technologies. This inability to imagine a future with AI beyond what we already know could be what underlies our portrayals of AI – in the media, for example, AI is always perceived as the ‘other’ with the ability to destroy mankind (as highlighted in the online article). I feel a recognition that AI is already pervasive in our lives is important in our discussions of AI – a recognition that does not stem from the fear of the unknown so often presented by the media.
