Time and Mutable Temporality: A Closer Look at Algorithms and Algorithmic Culture in Film

Abstract

Algorithms have become increasingly dominant and pervasive, warranting a need to scrutinise and study these technologies further (Finn, 2017; Hassan & Purser, 2007; Kitchin, 2016). Existing literature has explored how technologies have introduced a virtual dimension into our lives, often considering such relations in a spatial sense, with limited study of temporality despite space and time being “indivisible elements” (Hassan & Purser, 2007). This paper shifts the discussion to examine the relation between time and algorithms in an attempt to become “time aware” (Sabelis, 2002), exemplified through an analysis of the interactive film ‘Bandersnatch’ and borrowing concepts from New Media theorists Manovich (1995) and Røssaak (2011).

 

Invisible Hands: Algorithms and Time

As algorithms become increasingly dominant and pervasive in our lives, the need to scrutinise and decipher these technologies becomes more apparent (Finn, 2017; Hassan & Purser, 2007; Kitchin, 2016). Existing studies of algorithms – and more broadly information and communication technologies (ICTs) – have explored extensively how such technologies have introduced a virtual dimension into our lives, although much of this literature has tended to consider the relation in a spatial sense (Hassan, 2007; Kittler, 1995). Often, ICTs are thought to have allowed for the “shrinking” of the world, where a person connected to the Internet can interact with another person across the globe or gain access to real-time news from another country (Crang, 2007; Kwak, Poor & Skoric, 2009). In other words, users “share a common space, a virtual space that both accept as being a real space, a real virtuality that has real-world effects” (Hassan, 2007, p.41). Naturally, the notion of time comes into play here, as ICTs have allowed this immediate exchange of information to occur. Despite the close relation between space and time, the present literature has focused on understanding technologies spatially, with limited understandings of time.

Indeed, “we have cyberspace so why not cybertime?” (Hassan, 2007, p.41). Hassan suggests this gap in the literature is “rather strange” given that “space and time are indivisible elements” where “one makes no sense without the other.” This essay attempts to address that gap, specifically through an analysis of algorithms and how they have altered our perceptions of time. That is not to say, however, that existing literature has entirely ignored the temporal aspect of understanding technologies; these perceptions of time will be addressed in greater detail in later sections of this essay. What is significant in putting a spotlight on ‘time’ in our networked society is that it opens our eyes to how our increasing use of networked technologies has accelerated the way in which we experience time at an unprecedented scale, thereby trapping us further in an environment where we may lose control and autonomy over our use and understandings of time (Hassan, 2007; Hassan & Purser, 2007). As Hassan and Purser (2007) posit, the network society “annihilates space (and clock time) and has brought time (and speed) as a legitimate dimension of social inquiry to the fore.” To attempt an examination of time within the context of these technologies is to attempt to become “time aware” (Sabelis, 2002), perhaps moving us closer towards freeing ourselves from the clutches of clock time, of modernity and of capitalism.

This discussion on rethinking time within the context of a networked society is timely, as we move into the era of “Web 3.0” where ICTs become “smarter” – some even liken this to an artificial intelligence assistant (Nath, Dhar & Basishtha, 2014; Techopedia, n.d.). With increasing complexity in technologies, it becomes even more imperative to understand how they have influenced our lives, especially in ways that are less visible to us. The algorithm is one such example – elusive yet pervasive. Finn (2017) highlights the power we, as users, have entrusted to algorithms: “We imagine these algorithms as elegant, simple, and efficient, but they are sprawling assemblages involving many forms of human labor, material resources, and ideological choices” (Finn, 2017, p.7). Clearly, algorithms serve as a useful and fascinating example through which to better understand issues of time.

Given algorithms and time have both been rendered invisible – although not intentionally – in our increasingly networked society, this essay embarks on an ambitious process to understand how algorithms have altered our understandings of time and to be “time aware”. The first section of the essay attempts to answer the following questions:

  1. How do we presently think about algorithms and time?
  2. How can we rethink our understandings of algorithms and time so we may free ourselves from the workings of [clock] time in our increasingly networked society?

The second section of this essay closes in on the specific workings of algorithms in film – an area with limited research – and how they work to change our perceptions of time, drawing from Lev Manovich’s and Eivind Røssaak’s works. As Røssaak (2011) argues, “film and photography are no longer medium-specific qualities, but are rather two of the ways algorithms hide themselves” (p.191). The shift from analog to digital film amidst a post-industrial society makes for a fresh and interesting analysis in this study of time.

 

Rethinking Algorithms

Algorithms are commonly defined and understood as sets of instructions to be carried out in a particular sequence to accomplish some task, usually to solve a problem (Algorithm, n.d.; Mackenzie, 2005). These algorithms are often embedded within software – a hidden logic (Manovich, 1995) – thereby imbuing the technology with a sense of mystery and a level of power (Chun, 2011; Finn, 2017). As Kittler (1995) points out, “the so-called philosophy of the computer community tends to systematically obscure hardware by software,” suggesting programmers in today’s society are content to manage technological complexities by hiding them from our view. This obscurity and complexity of algorithms, which many scholars have spoken of, in some ways explains why time is difficult for us to understand. With such advancements in technology, new time fractions – like the nanosecond and picosecond – must be used to measure computer operations because they run so quickly. Time has quite literally sped up, and it becomes more difficult for us to comprehend or perceive time within this technological context. “These are invisible moments that we can only capture mechanistically and mathematically” (Hassan & Purser, 2007). In other words, only the software we use to carry out other algorithms can measure these new time fractions; people can rarely comprehend these types of time, which becomes problematic as we are swallowed into this intense acceleration of time through our use of software.
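The common definition above can be made concrete with a small sketch of my own (illustrative only, not drawn from any of the cited works): a textbook sorting algorithm as a finite sequence of steps, timed with Python’s nanosecond-resolution clock – exactly the kind of time fraction invisible to human perception:

```python
import time

def insertion_sort(items):
    """A textbook algorithm: a finite, ordered sequence of steps
    that transforms an input into a desired output."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements right until key's position is found.
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

start = time.perf_counter_ns()          # clock with nanosecond resolution
ordered = insertion_sort([5, 2, 4, 1, 3])
elapsed_ns = time.perf_counter_ns() - start

print(ordered)       # [1, 2, 3, 4, 5]
print(elapsed_ns)    # elapsed time in nanoseconds, far below human perception
```

The machine completes the entire sequence in a duration we can only name mathematically, never experience.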

Beyond thinking about algorithms as “linear sequences of steps to be carried out mechanically” (Mackenzie, 2005), it would make for a more meaningful discussion of time if we expanded our understanding of algorithms not just in relation to other technologies, but attended to them as objects of analysis in themselves. Through an analysis of the Viterbi Algorithm, Mackenzie (2005) uniquely argues that algorithms should be “judged as embodying singular applications of human individual or collective intelligence.” In simpler terms, as Hassan and Purser (2007) suggest, the algorithm “brings the deadening logic of ones and zeros – the basis of binary code – to life,” where it rejects the notion of algorithms as mere mechanical repetition and establishes possible relationships between “things that are disjointed, by concatenating events in paths.” Much like the way Finn considers algorithms as “culture machines” which serve as “filters” through which we consume entertainment and news, Mackenzie illustrates this power of algorithms (2005, p.103):

Algorithms make the world they work in hang together in certain ways and not others today. They give weight to relations, and they treat relations as real by holding things together and by making some conjoined path of translation discernible.

Imperative to this manner of thinking is Mackenzie’s conclusion that the “mechanical clock entimes according to a preset and invariable rhythm” while the “computer-based time of the network is a microworld that is entimed by the irreducible traces of human intervention and the potentially unlimited experiences of duration that these may generate” (Hassan & Purser, 2007, p.16). Simply, time is relative and contextual. Likewise, though not specific to algorithms, Hassan (2007, p.15) argues “human control over temporal processes is possible in network time in ways that were impossible through the mechanical time of the clock.” Algorithms, like clock time, have long been assumed to possess an absolute and axiomatic quality. Having broadened our understanding of algorithms, we can likewise rethink time, expanding our experiences of time beyond that of clock time, especially amidst today’s networked environment.

 

Changing Perceptions of Time

For many people, time seems to exist in the background; it is “something we deal with almost without conscious thought” (Hassan & Purser, 2007, p.4). To challenge dominant notions of time, we must first understand how these understandings of time came about and how they have shackled us to our temporal experiences within a networked society. “Time is social,” where “the living body, nature, and culture came together in ancient societies across the world to form a diversity of relationships with time” (Hassan, 2007, p.38). This “social production of time” refers simply to how people and societies “created time” by giving meanings of duration, of growth and of decay. One example Hassan (2007) highlights in his essay is the Chinese practice of zuo yuezi, where a woman goes through thirty days of confinement after childbirth, coinciding with a lunar cycle. From a temporal perspective, he argues this is motivated by the desire “to create a cultural significance for the embedded temporality of pregnancy, to order it and bring it more directly under human understanding and therefore control” (Hassan, 2007, p.39). This idea of ‘control’ and ‘order’ shall be explored further in the second section of the essay using a specific example of a contemporary film.

The introduction of the clock can be said to have diminished our control over time and limited our temporal experiences (Hassan, 2007; Mackenzie, 2005). We now follow “the rule of the clock,” which Hassan (2007, p.40) refers to as “a mechanical abstraction that places time outside the immanency of human creation and experience.” Our experience of time has been standardised; clock time has become the universal system of time for the benefit of modernity, industrialism and capitalism (Gurevich, 1976, as cited in Hassan, 2007). With the introduction of Standard Time and “time zones” around the world at the 1884 International Meridian Conference, “people in modern societies were schooled from infancy that the numbers around a clock face measured the reality of absolute time, and therefore the experience of time had to be measured by it,” thereby undermining our control and autonomy over the social production of time (Hassan, 2007, p.40). Quite simply, we have surrendered the way in which we live to the clock, to the extent of being able to guess the time without needing to look at one – for instance, when the canteen becomes crowded with people three hours into our work day, we know it is lunchtime and therefore it is 1pm.

This becomes especially problematic in our networked society, where technologies enable us to be connected to others around the world and we are expected to be available anytime and anywhere (Crang, 2007; Hassan & Purser, 2007). If we continue to adhere blindly to clock time, we continue to grant time the power to dictate our lives, as technologies have led to the compression and acceleration of time. For example, I receive an email at 11pm in Singapore from an American-based director I am working with – there is a 12-hour time difference. Here, ICTs have allowed for instant communication (compressing time and space), yet because I am aware it is 11am in America where they are working, I adhere to the rules of the American clock and respond almost immediately. In Shove and Southerton’s (2000) words, “this sense of synchronisation and choreography” holds “the promise to help people cope with the compression and fragmentation of time. But in so doing they lock their users into certain practices and habits, at the same time requiring an extensive if routinely invisible supporting infrastructure with the unintended consequence of tying people into an ever denser network of inter-dependent … relationships with the very things designed to free them from such obligations” (as cited in Crang, 2007, p.77). Not only does this example illustrate the types of time generated by the network society – instantaneity, real time, 24/7 (Hassan & Purser, 2007) – it is also reminiscent of how technologies have drastically shifted the boundaries between organisational and private life, thereby affecting the ways we understand labour, organisation and management (Sabelis, 2002). By rethinking the way we understand time beyond that of the clock, we may begin to regain control of time and spend it better in this networked society. Time can be social again.

Crang (2007) highlights several ways in which we can consider the time afforded by New Media. He suggests ICTs have altered the duration and pace of events, although he importantly notes that our experiences of time “are contingent for different people in different places” (Crang, 2007, p.70), further emphasising the relativity of time. “With many users accessing services electronically on the basis that it is faster, there may be a slowing down for those without such access” (Crang, 2007, p.71). ICTs have not only changed the way in which we use our time, they have also changed our sense and measures of time, “away from abstract external time (“I’ll be there at 9am”) to one embedded in activities or a relational time between individuals and tasks (“I am just arriving at the station, how far away are you?”) and from absolute space to relative space (“How far are you from me? The bar?”)” (Crang, 2007, p.76). This acknowledgement of how ICTs have changed temporal and spatial relations allows us to recognise and understand how we negotiate time today.

Imperative is an understanding of time as social and as relative. As Hassan (2007) rightly highlights, “ironically, it is with the dawning of the computer age – the source of temporal acceleration – that our serfdom, vis-a-vis the relationship to the clock, is beginning to change.” He envisions a promising future to “control” time once more through the temporal worlds created by ICTs, allowing a new engagement with time.

 

Algorithmic Culture in Film: Mutable Temporality

In today’s post-industrial society, film – having changed from analog to digital – provides a fascinating example through which we can understand time. While film is traditionally understood as broadcast media, film – in production, distribution and consumption – has developed with technology to allow interactivity and digital generativity, thereby qualifying as a type of New Media (Manovich, 1995; Manovich, 2003). New Media today can be understood as media reduced to digital data controlled by software, which can be manipulated like any other data (Manovich, 1995, 2003). For example, a shot in a film can be stored as matrix data that can then be manipulated, according to additional algorithms, to change colour or resize. Similarly, Røssaak (2011) posits that film is a way in which algorithms hide themselves and borrows from Galloway to suggest an “algorithmic culture” exists in film (p.190). As in gaming, Røssaak (2011) argues digital editing software, like Final Cut Pro and Adobe Premiere Pro, is a “game” that is “all about finding the best algorithm,” where users can activate thousands of ready-made algorithms simply to transform the image.
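Manovich’s example – a shot stored as matrix data, transformed by further algorithms – can be sketched as follows (a toy illustration of my own, using a plain 2D list of grayscale pixel values in place of a real frame):

```python
def brighten(frame, factor):
    """A colour transformation applied to a 'shot' stored as a matrix:
    scale every pixel value, clamped to the 0-255 range."""
    return [[min(255, int(px * factor)) for px in row] for row in frame]

def downscale(frame):
    """Resize by keeping every second pixel in each dimension."""
    return [row[::2] for row in frame[::2]]

# A tiny 4x4 'frame' of grayscale intensities.
shot = [
    [10, 20, 30, 40],
    [50, 60, 70, 80],
    [90, 100, 110, 120],
    [130, 140, 150, 160],
]

print(brighten(shot, 2.0)[0])   # [20, 40, 60, 80]
print(downscale(shot))          # [[10, 30], [90, 110]]
```

Once an image is merely numbers in a matrix, “changing colour” or “resizing” is nothing but another algorithm applied to the data.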

Perfectly embodying Manovich’s (1995, 2003) understanding of New Media and Røssaak’s (2011) argument of film as an “algorithmic culture” is the interactive film ‘Bandersnatch’, recently released on the streaming platform Netflix (Roettgers, 2018; Thomas, 2019). While the earlier paragraph focused at length on algorithms in film production, this example illustrates how algorithms are also present in our consumption of film, thereby changing the way in which we experience time, perhaps in ways we do not realise. Set in 1984, ‘Bandersnatch’ follows a geeky teenager, Stefan, who “sets out to turn a multiple-choice science-fiction book by the same title into a pioneering computer game that also presents the player with a series of choices” (Roettgers, 2018). The film was created and produced on the basis of algorithms – a team of Netflix engineers built the company’s scriptwriting tool allowing creatives to “build complex narratives that include loops, guiding viewers back to the main story when they strayed too far, giving them a chance of a do-over” (Roettgers, 2018). Interestingly, the word ‘loop’ is often thought of in a temporal sense, suggesting an endless cycle of being trapped – as discussed earlier in this essay. According to Netflix (as cited in Roettgers, 2018), there are “over a trillion unique permutations of the story”, although this also includes relatively simple choices users can make that do not alter the story. This recalls Manovich’s (1995) argument that new media follows the logic of individual customisation rather than mass standardisation in a post-industrial society; ‘Bandersnatch’ exemplifies what Manovich refers to as “modularity” and “variability”. This fragmented presentation of the film very much mirrors our fragmented experience of time as we view it, interrupted every time we are asked to make a choice.

Time is an important notion in ‘Bandersnatch’, although expectedly less visible. Viewers have to make a decision within ten seconds or the algorithm decides a ‘default option’ for them (Roettgers, 2018; Thomas, 2019). There is a paradox in our consumption of this film: we are granted the power to decide for the character, thereby dictating how much time we spend watching the film in total, yet we are controlled by the time pressure exerted by the system. ‘Bandersnatch’ has challenged the way in which we relate time to film. The advent of the DVD, transcoding film from analog to digital formats, allowed viewers to tamper with time: we are able to pause, rewind and replay parts of the film, so that film is no longer a continuous experience. Here, the control over how we experience time as we view the film lies very much with us. ‘Bandersnatch’ – or the algorithms built into the film – on the other hand, seems to have the upper hand in dictating our timed experiences.
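The timed default described above can be modelled abstractly. The sketch below is my own illustration – the node names, choices and decision window are invented stand-ins, not Netflix’s actual data structures – showing how a branch point falls back to a default path when the viewer’s decision time runs out:

```python
# A branching story as a graph: each node lists its choices and a default.
story = {
    "start": {"choices": {"cereal_a": "breakfast", "cereal_b": "breakfast"},
              "default": "cereal_a"},
    "breakfast": {"choices": {"accept_job": "loop_back", "refuse_job": "continue"},
                  "default": "accept_job"},
}

def next_node(node, viewer_choice, seconds_taken, limit=10):
    """Return the next story node; fall back to the default
    when the viewer exceeds the decision window."""
    branch = story[node]
    if viewer_choice not in branch["choices"] or seconds_taken > limit:
        viewer_choice = branch["default"]    # the algorithm decides for the viewer
    return branch["choices"][viewer_choice]

print(next_node("start", "cereal_b", seconds_taken=4))         # 'breakfast' – viewer's pick
print(next_node("breakfast", "refuse_job", seconds_taken=12))  # 'loop_back' – default taken, viewer timed out
```

Even in this toy form, the paradox is visible: the viewer’s agency exists only inside a window whose duration the system, not the viewer, defines.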

 

To Infinity and Beyond: A Conclusion on Time

This essay has attempted to reimagine our perceptions of time and temporality amidst a networked society, broadening our understanding of algorithms. By examining the relationship between algorithms and time – both of which are often less visible – the hope is for users of technologies to become more “time aware”, and thereby rethink ways in which we may regain control over time. In this essay, I have explored multiple ways in which ICTs have changed our perceptions of time. Hassan (2007) and Crang (2007) perceive time as a social production, as contextual and relative, and suggest ways in which ICTs can expand our understandings of time so that it is once again ‘social’, thereby freeing us from the limits of clock time. The example of the interactive film ‘Bandersnatch’ illustrates the pressing need to think deeply about time in our consumption of ICTs, borrowing concepts from Manovich (1995) and Røssaak (2011). This essay aims to add another layer to the exploration of time and temporality in our networked society, with which we may, in the words of Hassan and Purser (2007), “interpret (and thus maybe have some agency and control over) the new world that digital networks have created and the (literally) new times that are being created with every network connection” (p.20). Surely, as more attention is placed on studying temporality in technologies, we can learn to renegotiate our relationship with time.

 

References

Algorithm. (n.d.). In Cambridge Dictionary.com. Retrieved from https://dictionary.cambridge.org/dictionary/english/algorithm

Chun, W. H. K. (2011). Programmed Visions: Software and Memory. Cambridge, MA: MIT Press.

Crang, M. (2007). Speed = Distance/Time: chronotopographies of action. In R. Hassan & R. E. Purser (Eds.), 24/7: Time and Temporality in the Network Society (pp. 62-88). Redwood City, CA: Stanford University Press.

Finn, E. (2017). What Algorithms Want: Imagination in the Age of Computing. Cambridge, MA: MIT Press.

Hassan, R. (2007). Network Time. In R. Hassan & R. E. Purser (Eds.), 24/7: Time and Temporality in the Network Society (pp. 37-61). Redwood City, CA: Stanford University Press.

Hassan, R., & Purser, R. E. (2007). 24/7: Time and Temporality in the Network Society. Redwood City, CA: Stanford University Press.

Kitchin, R. (2016). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14-29. doi: 10.1080/1369118X.2016.1154087

Kittler, F. (1995). There is No Software. Retrieved from http://www.ctheory.net/articles.aspx?id=74

Kwak, N., Poor, N., & Skoric, M. M. (2009). Honey, I Shrunk the World! The relation between Internet use and international engagement. Mass Communication and Society, 9(2), 189-213. doi: 10.1207/S15327825

Manovich, L. (1995). The Language of New Media. Cambridge, MA: MIT Press.

Manovich, L. (2003). New Media from Borges to HTML. In N. Wardrip-Fruin & N. Montfort (Eds.), The New Media Reader (pp. 13-25). Cambridge, MA: MIT Press.

Mackenzie, A. (2005). Protocols and the irreducible traces of embodiment: the Viterbi algorithm and the mosaic of machine time. In R. Hassan & R. E. Purser (Eds.), 24/7: Time and Temporality in the Network Society (pp. 89-105). Redwood City, CA: Stanford University Press.

Nath, K., Dhar, S., & Basishtha, S. (2014, February). Web 1.0 to Web 3.0 – Evolution of the Web and its various challenges. 2014 International Conference on Reliability Optimisation and Information and Technology (ICROIT). doi: 10.1109/icroit.2014.6798297

Nations, D. (2018, September 6). Is Web 3.0 Really a Thing? Retrieved from https://www.lifewire.com/what-is-web-3-0-3486623

Roettgers, J. (2018, December 28). Netflix takes interactive storytelling to the next level with ‘Black Mirror: Bandersnatch’. Retrieved from https://variety.com/2018/digital/news/netflix-black-mirror-bandersnatch-interactive-1203096171/

Røssaak, E. (2011). Between Stillness and Motion: Film, Photography, Algorithms [E-reader Version]. doi: 10.26530/OAPEN_431802

Sabelis, I. (2002). Managers’ Times. Amsterdam: Bee’s Book.

Techopedia. (n.d.). Definition – What does Web 3.0 mean? Retrieved from https://www.techopedia.com/definition/4923/web-30

Thomas, L. M. (2019, January). How long is ‘Bandersnatch’? You can literally get lost in this ‘Black Mirror’ movie. Retrieved from https://www.bustle.com/p/how-long-is-bandersnatch-you-can-literally-get-lost-in-this-black-mirror-movie-15574442

Week 13_Wrap up.doc

Summarize Anderson’s critique of how others speak of digital journalism.

Anderson proposes a ‘sociological approach to computational journalism’ through six lenses – economic, political, field, cultural, organisational and technological. She suggests existing approaches to understanding digital journalism are problematic, stemming from what she calls internalist tendencies – i.e. considering problems of journalism scholarship from the perspective of the journalism profession – thereby arguing for a more interdisciplinary understanding of computational journalism. More specifically, Anderson suggests authors who respond to digital journalism from the perspective of the profession tend to overplay positive developments of emerging computational journalism while overlooking the trade-offs.

Can you generalize her critique of digital journalism to a critique of “typical” AI narratives?  Can we then generalize her suggested approaches to digital journalism to AI in general?

Indeed, we see how “typical” AI narratives have perpetuated an overly positive perspective of AI in our lives. Here, I refer to a positive perspective of AI as exceeding human intelligence and having the ability to possibly one day overrule human beings. Discussions surrounding AI have also tended to centre on the trade-offs of developing and using these AIs in our lives. While I can see how Anderson’s critique of digital journalism could be generalised to AI, I thought the problem of ‘internalist tendencies’ was less prevalent simply because these narratives of AI tend to stem from producers of mass media. Developers of AI, while they propose possibilities of being “better” than humans, are less likely to suggest the possibility of AI taking over the world in the near future. Further, discussions of AI seem always to have been conducted in a more sociological manner – relating to labour, for example.

Anderson’s suggested approaches would make for a meaningful discussion of AI, given this technology is emerging and increasingly prevalent in our lives. Our discussion revolving around bias in AI is one such example.

Week 11_On Creativity.doc

In his article, Flaherty suggests some key ideas on creativity, AI and augmentation that I found provocative. He references the Google DeepMind paper proposing a Generative Query Network (GQN) that can “build spatial representations about a scene without relying on labelled training data or domain knowledge.” This example could be a first step towards AI becoming self-aware within a spatial locality, moving towards AI having autonomy for creative self-expression. This video provides a simple explanation of how GQN works:

Flaherty suggests the existing literature is insufficient to properly evaluate AI based on our limited definitions of what constitutes ‘creative’, though the other readings, reflecting on explainability, self-reflection and agency, have attempted to redefine our understandings of creativity.

Guckelsberger, Salge & Colton (2017) argue the need to redefine our understandings of creativity/agency and adopt the perspective of the system. They suggest an understanding of “why (are computational creativity systems being creative)?” would allow for an attribution of intentional agency, thereby leading to a stronger perception of creativity. Here, the system’s intentional agency is its capacity to have a purpose, goal or directive for creative action, and the paper suggests a system’s inability to account for its agency validates our disapproval of its creativity. They adopt an agent-centric approach – looking at the value of actions and artefacts from the perspective of the creative agent (i.e. the AI) instead of that of an external observer. While I think this paper has put forth very bold and useful suggestions for better understanding creativity and agency in AI, I find it difficult to imagine how we might go about adopting this approach. How do we begin to get answers from the AI as to why it is being creative? Perhaps this stems from my lack of programming knowledge, but it seems that to get answers to such questions, the human programmer must have written code or installed a programme that would allow such answers to emerge. If we adopt this approach (assuming it is possible), then are we dismissing the agency of its creator?

Bodily & Ventura’s paper on CC systems suggests explainability as an effective meta-aesthetic (an aesthetic for evaluating aesthetics) for autonomous creative systems. They define aesthetic to encompass all qualities necessary to judge a piece of art as ‘creative’. Crucially, Bodily and Ventura highlight the need for a balance: too little information will not satiate the observer’s desire to understand, while too much detail suggests the agent is carrying out predefined instructions and is thereby not creative. While the paper acknowledges Guckelsberger, Salge & Colton’s non-human-centric approach, Bodily & Ventura suggest it is necessary to recognise that creativity occurs within a context; creativity emerges in the interaction between an agent’s “thoughts” and a sociocultural context.

Having read these articles, I feel more confused/conflicted in thinking about AI as having agency/creativity. On one hand, I agree with the earlier paper that it would make more sense to understand an AI’s meaning-making processes from the AI’s perspective (after all, how can we impose the human way of thinking upon a machine?), but to imagine AI as an autonomous individual entity seems highly improbable and, frankly, a little frightening.

Week 10.doc

Boden defines ‘creativity’ as an ability to generate novel and valuable ideas, further deconstructing the notion of ‘novelty’ into psychological (a “P-creative idea” is new to the person who generated it) and historical meanings (an “H-creative idea” has never occurred in history before). Boden’s idea of ‘creativity’ in computers is refreshing though not entirely new – Hollywood films, for example, have often thrived on P-creativity and are rarely entirely new. Boden also discusses three ways in which novel ideas may be produced – (1) combination, (2) exploration and (3) transformation. Interestingly, Boden suggests combinational creativity is the most difficult for AI to model, given the need for access to vast amounts of data and the need to be linguistically/culturally sensitive. I would have assumed AIs would face the most difficulty in creating a transformation, given that is most difficult for a human being to achieve – and humans are the ones who programme these AIs.

Week 8.doc

A follow-up on last week’s discussion about software art –

Came across this example when I was reading for the mid-term paper about film, algorithmic culture and time. Gregg Biermann takes clips from Hollywood classics and processes them through mathematical formulas designed to create optical patterns. I thought this would make for an interesting negotiation between what we talked about: the original filmmaker’s intentions vs. Biermann’s interpretation/intentions.

The original drive away scene from Psycho (Alfred Hitchcock, 1960):

Gregg Biermann’s Spherical Coordinates (based on Psycho):

 

About Superintelligence (Week 8)

  1. What seems likely to you in the coming years for Superintelligence? Compare your view to Bostrom’s scenarios.
  2. How would you characterize the portrayals of AI in readings 1, 2, and 3 – and how might they “matter”?

 

Bostrom seems to adopt a more cautious, critical view of Superintelligence, suggesting our fate would depend on the actions of a machine superintelligence and that it would be problematic to control what the superintelligence would do. What was surprising to me was how pervasive and available AI has already become – we often think of AI as taking the form of a robot and behaving like a human; Bostrom counters this by providing examples we use in our daily lives (e.g. Google search, Siri). Bostrom proposes three types of superintelligence (speed superintelligence, collective superintelligence and quality superintelligence). While the increasing pervasiveness of AI seems probable and possible within the next 20 years, superintelligence seems to be something still out of reach. At this moment, it seems as though existing AI has yet to reach our abilities of understanding language or recognising objects, as Bostrom suggests.

AI seems to be portrayed as a desirable yet potentially damaging entity. These ‘problems’ seem to echo issues we already face today, enabled by the technologies we have – e.g. phishing, identity theft/impersonation. The problem then, it seems, is to imagine what other kinds of problems could arise with AI beyond those we already face with present technologies. This inability to imagine a future with AI beyond what we already know could be what underlies our portrayal of AI – in the media, for example, AI is always perceived as the ‘other’ with the ability to destroy mankind (as highlighted in the online article). I feel a recognition that AI is already pervasive in our lives is important in our discussions of AI; a recognition that does not stem from fear of the unknown as often presented by the media.

week 6.doc

David Berry’s Rip, Burn, Copy summarises the history of the FLOSS (free/libre and open-source software) movement that has led to increased visibility of opposing interests between corporations and individual programmers. Individualistic programmers will exit companies with their skills and knowledge, while a corporation needs to control knowledge and information. Berry highlights the difficulty in policing the flow of information due to the labour-intensive nature of software.

He also outlines issues surrounding the ‘hacker ethic’, where sharing, debate and criticism were encouraged, with a brief history of Stallman’s GNU General Public License (GPL), meant to ensure a communal system of software co-operation and sharing.

 

In Matthew Kelly’s All Bugs Are Shallow: Digital Biopower, Hacker Resistance, and Technological Error in Open Source Software, Kelly takes on a perspective that is against (1) media’s simplified depiction of FOSS as anti-capitalist, and (2) open-source software as another form of corporate exploitation.

Kelly suggests the synthesis of production, identity and belief systems in the FOSS movement exhibits Foucault’s biopolitics, where “individuals take on an economic existence as something more than just mere labor”. Kelly also addresses the misconception of hackers as “anti-authoritative anarchists”, suggesting they “possess an enthusiasm for programming computers as an end in itself”. He suggests a hacker’s biopolitical existence as a productive apparatus to allow informational and cultural production, existing as a personal enterprise. By creating codes, hackers also create the ideological significance attached to it.

A key point in Kelly’s article is the notion of power and resistance. He talks about the cathedral – a finished software released to the public only after extensive testing within a select circle of cathedral architects – and the bazaar – a marketplace encouraging modifications amongst users, developers and original programmers. Kelly suggests bugs are signs of progress and should not be seen as a sign of developers’ incompetence. Hackers, who are more willing to use experimental code or beta versions released by developers, thereby help produce information and cultural standards, and strengthen open-source ideology.

Reading Kelly’s article reminded me of the debate surrounding Anonymous – the international hacktivist group widely known for its various DDoS cyber attacks against governments, institutions and corporations. Some things I started thinking about –

  • Anonymous is a very visible, modern example of a hacker group that has been portrayed as “anti-authoritative anarchists”, but if we think about it, is what they’re doing very wrong?
  • e.g. Operation Payback (2010): Anonymous launched a DDoS attack that shut down Aiplex’s (an Indian software company contracted with film studios to launch DDoS attacks on websites used by copyright infringers) website for a day
    • Anonymous’ press release – “Anonymous is tired of corporate interests controlling the internet and silencing the people’s rights to spread information, but more importantly, the right to SHARE with one another. The RIAA and the MPAA feign to aid the artists and their cause; yet they do no such thing. In their eyes is not hope, only dollar signs. Anonymous will not stand this any longer.”
    • Is this wrong? Are they just “strengthening the open-source ideology”?
    • As a filmmaker myself, I can understand why the MPAA (Motion Picture Association of America) would go to such lengths to ensure IP rights since movies are an expensive business, but at the same time I believe it is right to share. I’m not sure whether, in this context, Anonymous is encouraging the sharing of software as we have been talking about, or something else?

 

Kleiner’s Telecommunist Manifesto covers the political economy of network topologies and cultural production. In the introduction, he suggests wealth and power are intrinsically linked, and only through the former can the latter be achieved. Only the self-organisation of production by workers can eliminate exploitation.

Kleiner argues the Internet has been reshaped by capitalist finance into an inefficient client-server topology and suggests a need for an alternative that would provide means of efficiently allocating the collectively-owned material wealth required to build free networks and free societies. He outlines the conditions of the Working Class on the Internet and suggests change requires the application of enough wealth to overcome the wealth of those who resist such change. I question the possibility and plausibility of such a suggestion. He describes the Poverty of Networks and idealises the notion of a community of peer producers that can grow without developing layers of coordination because they are self-organising.

In contributing to the critique of free culture, Kleiner suggests copyright as a system of censorship and exploitation. (Perhaps we might compare this with Kelly’s article and the example of the Anonymous group?)

The article further suggests producer control is merely creating a read-only culture, thereby destroying the vibrancy and diversity of creative production. Creative Commons is accused of being an anti-commons perpetuating privatisation in a capitalist environment under a misleading name.

Comment: Kleiner seems to adopt an overly critical view of the Internet and copyright laws within the capitalist context. When considering the notion of sharing in software, I can’t help but think of Casilli’s article last week, where Casilli highlights how the pervasiveness of computing and the usage of mobile technologies have led to the failure to recognise invisible digital labour. If we adopt the ‘hacker ethic’ and emphasise sharing, would we still be ignoring the inherent inequality that is embedded within our study of software? In other words, are those who are not equipped with skills/knowledge about software unable to contribute to this sharing economy and thereby still rendered powerless? Or are they unaware of their contributions as users, simply because they do not have the skills/knowledge?

Week 5.doc

Casilli’s Digital Labor Studies Go Global: Toward a Digital Decolonial Turn

Casilli’s article provides a comprehensive introduction to the types of platforms where the issue of digital labor remains concealed. He argues that digital economies’ increased outsourcing of tasks to developing countries has created new global inequalities and proposes to render visible these invisible workings of digital labour. He highlights how pervasive computing and the usage of mobile technologies, so accessible to us, have resulted in the failure to recognise invisible digital labour.

He outlines four platform ecosystems: on-demand platforms, microwork platforms, online social platforms and “smart” platforms. Through a brief analysis of these platforms, Casilli exposes the way in which these platforms exploit and continue to exploit human labour, though at differing intensities and visibility.

On-demand Platforms
  • Customers connect with independent goods/service providers to allocate resources in real-time
  • Issues of insecure working conditions, lack of guarantees and income volatility
  • “Immaterial” labour performed by all users, i.e. data-intensive tasks
  • Insufficient online performance leads to discontinuation of service for all users

Perhaps one example that we can all relate to in Singapore is the use of food delivery services like GrabFood / Deliveroo. Even the use of dining reservation apps like Chope can be an example here.

Microwork Platforms
  • Crowdsourcing services where recruiters are matched with workers to perform small, repetitive, and often unskilled tasks.
  • Human-based computation, obtained via microtasks, bridging the gap between computer processing and human judgment

E.g. when we turn on the predictive texting function on our smartphones, its predictions become more accurate over time. Or facial recognition technology.

Online Social Platforms
  • Based on communities of producers and consumers exchanging cultural goods
  • Content production vs. active participation

I think we have all participated in this one way or another. It would make for an interesting discussion to bring in Finn’s discussions of the gamification of social media here that keeps people coming back.
This algorithm, so embedded and invisible to us, has turned life into some sort of a game. For instance – though this is arguable – Instagram has become a game where users have 24 hours to view one’s Instastory before it disappears. This mechanism keeps people opening the app multiple times a day, and some of us may fail to realise this time restriction has encouraged us to consume social media in that way.

“Smart” Platforms
  • Behavioural data produced by connected objects and smart environments enabled by the Internet of Things

E.g. Apps-Connect worked with NYP to create a smart floormat where caregivers can monitor the elderly. If the elderly enters a kitchen but does not exit after a period of time, it could indicate he/she has fallen down and send a notification to the caregiver.
Perhaps this could be a counter-example to Casilli’s characterisation of those who have chosen to use these technologies as having a high level of agency and self-determination. It would make for an interesting discussion to bring up ethical issues (of vulnerability, of surveillance) here, or digital labor issues where caregivers are expected to be on standby 24/7.

 

Finn’s Coding Cow Clicker: The Work of Algorithms

Finn brings up some interesting points that suggest social gaming has continued to blur the lines between play and work (“escapism masquerading as efficiency”), between reality and virtual. He suggests gamification is implicitly used to manipulate the players. Bogost adopts a critical view of gamification where it should be referred to as “exploitationware” for abusing human susceptibility in a capitalist world thereby highlighting their role as “algorithmic culture machines”.

Cow Clicker, as a satirical response to the mindless repetition of these social games, clearly illustrates how gamification manipulates players. In this video, with specific reference to the Cow Clicker game, Bogost talks about how Facebook “brazenly” gives user data to the developers. Even through this example we see how people have contributed to this blurring of the distinction between what is real and what is virtual. It is scary. Finn suggests games on social media platforms “demonstrate the power of algorithmic systems to reorder human lives” and I have to agree. It is not just games that do this; the manner in which these social media platforms are built encourages this. On Snapchat/Instastory, for example, you have 24 hours to view your friends’ snaps before they disappear.

Finn also talks about the “interface economy”, where algorithms shaping the user interface were adapted to change the way in which we consume things. Interestingly, Hong’s discussion of the prosumer would add another layer to understanding how this interface economy has also influenced us as prosumers.

A key question Finn brings up in his paper is: who is motivating these changes between algorithm gamification and the marketplace, and what are we “sharing” in the sharing economy?

  • It would be meaningful to root this discussion in a specific context: e.g. Netflix and Bandersnatch – by allowing consumers to affect the fictional character’s decisions in the film, what are we “sharing”?

Finn also suggests we are asking the interface layer to make ethical and cultural decisions on our behalf, highlighting the high level of trust we have invested in such algorithms. E.g. we allow other parties to know where we are through location-dependent services.

Finn also talks about “cloud” as the invisible space in which we exchange information, also rendering invisible the amount of labour that goes into maintaining and operating these clouds. I thought it was utterly disgusting to refer to human beings as “animals” (p133), thereby bringing to mind the question: is our world of software desensitising us to emotions? Is it contributing to a loss of our humanity?

Other key ideas he addresses that I find simply adds on to the discussion of cultural and ethical issues include the idea of moral machinery and artificial artificial intelligence.

 

Hong’s Game Modding, Prosumerism and Neoliberal Labor provides an interesting discussion of how the game industry and the act of game modification allow the perpetuation of norms, thereby trapping gamers in a cycle of repeated game modding. Hong aims to understand how neoliberalism both restricts and enables the player’s capacity to share power, rather than to maximise one’s wealth or rights. He tackles the question of who benefits from game modding. I thought this point would be interesting to expand upon in class, beyond the context of game modding – our consumption of the Google search engine, for example.

week 3.doc

In My Mother was a Computer, Katherine Hayles launches into a lengthy discussion about the Regime of Computation, a continuation in the tradition of Turing’s work by adopting the view that computation starts with a limited set of elements and a small set of logical operations. These components can then be built up with increasing levels of complexity. Hayles seeks to understand the interactions between human language and programming language, beyond understanding this relationship within the “linear causal” model of thinking.

She talks about “intermediation” as a way of understanding the complex and entangled problems of interactions between the worldview of code and worldviews of speech and writing.

Hayles expands extensively on Wolfram’s Principle of Computational Equivalence, which proposes that systems found in the natural world can perform computations up to a universal level of computational power. She addresses the three claims in Wolfram’s work and proceeds to question whether computation should be understood as a metaphor pervasive in our culture, or whether it has ontological status as the mechanism generating the complexities of physical reality.
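Wolfram’s go-to illustration of this principle is the elementary cellular automaton – a one-dimensional row of cells updated by a tiny lookup table, from which surprising complexity emerges. A minimal sketch (my own illustration of the idea, not from Hayles’s text), using Rule 110, which has been shown to be computationally universal:

```python
def step(cells, rule=110):
    """Advance a 1-D elementary cellular automaton one generation.
    Each cell's next state is looked up from its (left, centre, right)
    neighbourhood in the bits of `rule`; boundaries wrap around."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right  # 0..7
        out.append((rule >> pattern) & 1)
    return out

# A single live cell, iterated a few times, already produces intricate patterns
cells = [0] * 31
cells[15] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

The whole “program” is an 8-bit lookup table, yet the behaviour is irreducibly complex – which is precisely the intuition behind treating computation as possibly ontological rather than merely metaphorical.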

I thought this would make for an interesting discussion in class, as Hayles talks a lot about code as ontological.

 

Nick Montfort’s book focuses on a single line of BASIC code – 10 PRINT CHR$(205.5+RND(1)); : GOTO 10 – as a way of understanding how computing works in society. He mentions that this approach seeks to avoid fetishising code by deeply considering the context within which code functions. Interestingly, Montfort proposes code is ultimately understandable, contrary to alternative views of code as an invisible entity humans cannot understand. Montfort’s analysis of this particular code is an application of Manovich’s 5 Principles of New Media – he talks about numerical representation and modularity. Montfort even refers to Manovich’s ‘transcoding’ principle when he talks about porting.

Montfort points out the subjectivity even within the act of porting, where the final output of the program is basically determined by what the programmer chooses to prioritise that will determine the qualities of the final port.

Potential discussion: how can we use Montfort’s discussions in examining modern-day examples?

 

Also, I found it weirdly satisfying to watch a video running the 10 PRINT code –

[On a Commodore 64]
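For anyone without a Commodore 64 to hand, a bounded Python sketch of what the one-liner does (the original loops forever; the slash characters stand in for PETSCII graphics codes 205/206, and the width/row bounds are my additions):

```python
import random

def ten_print(width=40, rows=10, seed=None):
    """Emulate 10 PRINT CHR$(205.5+RND(1)); : GOTO 10 over a finite grid.
    205.5 + RND(1) truncates to 205 or 206 - the two PETSCII diagonals -
    so each cell is a coin flip between the two slash characters."""
    rng = random.Random(seed)
    return "\n".join(
        "".join(rng.choice("╱╲") for _ in range(width)) for _ in range(rows)
    )

print(ten_print(seed=10))
```

It is, as Montfort argues, entirely understandable – one random choice per character – yet the maze-like output still feels more than the sum of its parts.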

week 2.doc

Manovich’s (1995) 5 Principles of New Media outlines key differences between old media and new media. He elaborates extensively on new media as numerical representation (principle #1) and highlights a key point: new media follows the logic of a post-industrial society – that of individual customisation, rather than mass standardisation. This notion of customisation is further facilitated by the modularity of new media (principle #2). Indeed, we see this everywhere: blog tools like wordpress.com/wix.com allow us to customise our webpages by putting together multiple media elements (like images and texts); viewing a cooking video on YouTube would trigger a string of ToTT baking tool advertisements. Even Netflix’s latest interactive film Black Mirror: Bandersnatch (2018) allows viewers to decide the film protagonist’s course of action.

The principles of automation, variability and transcoding are, according to Manovich, allowed for by perceiving new media as numerical representation and as modular. While he expands extensively on each principle, I thought the following points resonated with me more, as a digital native:

  1. On thinking of the Internet as one huge distributed media database: the problem of the 20th century is to figure out how to find an object that already exists somewhere
    • Who decides what we see first, on these big search engines?
  2. On transferring more power to the user when making choices: implicit is the transfer of a moral responsibility to us, as users
  3. A mutually-influential relationship between the computer layer and the cultural layer [of new media]
    • E.g. how data structured on Google affects what we know of the world (articles on wikipedia appear first; is this then the truth?)

Also because my interests lie in film, I came across this website that visualises film data (that exemplifies Manovich’s point on Modularity).

 

In Cutting Code: Software and Sociality (2006), MacKenzie highlights the (unfortunate) difficulty of defining “software”. Already, this expands what I had earlier understood software to be – a complex string of code existing behind our hardware – i.e., in MacKenzie’s words, an understanding of software as possessing secondary agency. He argues that the function of a software and how it works must be understood within “its constitution through and through as a code” – i.e. a recognition that software works a certain way within a particular context. He suggests software is a constantly changing relation that affects agency between people and machines in varied ways.

While I understand that MacKenzie attempts to explore software as a social object and process, I found it difficult to fully comprehend this idea and apply this to a contemporary example. Would it be relevant, for example, to bring in Dr. Mark Andrejevic’s study on how the act of data mining (i.e. using software in this specific way) in this Big Data era produces new forms of identity sense-making (a social process)? Is MacKenzie saying what Chun is saying in that software is ephemeral and therefore has to be understood within a particular situation?

 

Chun (2011) interestingly refers to “software” as magic in Programmed Visions: Software and Memory. She suggests all new media objects can mostly be reduced to “software”; that software is a “magical source”. I find it intriguing how there seems to exist an almost godly sort of admiration in referring to software as magic, yet implicit is a recognition of defeat in fully comprehending software. Chun talks about source code as fetish, which allows us to visualise what is unknown. This reminds me of how I visualised software:

Opening sequence in The Matrix (this code was revealed to be Japanese recipes)

Chun suggests software is hard to understand because it works invisibly and is fundamentally ephemeral. She argues software embodies a “way to navigate our increasingly complex world” and computers are “mediums of power”. Perhaps this would be an appropriate time to bring in Kelly Gates’ discussion of facial recognition technology as a seemingly necessary pursuit in today’s governance, undertaken without considering potential social problems. I thought this might be an example that illustrates this power Chun talks about, yet suggests potential issues when entrusting technology with such power. In Our Biometric Future, Gates suggests a loss of individuality and a rise of a logic of standardisation in facial recognition technology, where the algorithm in the technology concretises what we understand a “male” or a “female” or an “Asian” person looks like. I thought it important to recognise that biases, preconceptions and prejudices can be baked into the code of the software, where they continue to operate in opaque ways. This leads us nicely to talk about Finn’s (2017) What Algorithms Want.

 

Finn refers to this system of algorithms as a “culture machine”, powerfully serving as “filters” through which we consume entertainment and news. He scarily points out how algorithms have become the centre of most high-value tech companies. He, like Chun, compares code to “a magic spell”, even as an algorithm is, nominally, just a “method for solving a problem”. I’d love to believe in the future Finn imagines: a beautiful, idealistic collaboration between and amongst human and computational assemblages to do great things. But can this truly be a possibility given these algorithms are only accessible to conglomerates? Or when we continue to consume these algorithms (through the products of these conglomerates)?

In addition, Laurent’s article debunks some myths of artificial intelligence (AI) [1], which I thought relates to what we discuss about algorithms. Most important is his emphasis on the creators of the algorithms in these AI; we should be mindful these algorithms were created with knowledge that “always serves the interests of some over those of others”, and it would be unwise to view algorithms as neutral ways of solving our human problems.

Something that has caught my attention (and that of a lot of my peers) lately is an article discussing the possibility that Facebook’s ‘10 Year Challenge’ meme is actually a way to improve its facial recognition technology, although a Facebook spokesperson said it was entirely a user-motivated trend that went viral. I thought this would make for an interesting example about algorithms.

A news website I recently came across – observer.news – describes itself as a “news network in Singapore powered by artificial intelligence”. I wonder what this means?

 

On the point of magic, I was reading about AI in changing the filmmaking industry and came across this –

“Any sufficiently advanced technology is indistinguishable from magic.” – renowned science fiction author Arthur C. Clarke.

 

 

[1] Laurent, C. S. (2018). In Defence of Machine Learning: Debunking the Myths of Artificial Intelligence.