This course will examine contemporary trends in theorizing digital media with particular attention given to software and the video game as new media texts. The semester will be divided into two units. The first unit will address theories of code and software. We will discuss the concept of “software studies” in relation to traditional media studies, and investigate how code and software can be examined as aesthetic and political texts. Through an examination of code and semiotics, software and ideology, and critiques of particular software programs, we will lay a theoretical foundation for the investigation of our second unit: video games. Following the rise of the “serious game movement,” we will investigate the emergence of political games, persuasive games, simulation games, newsgames, art games, etc., in relation to the theoretical concepts we developed while analyzing software and code.

Thursday, February 28, 2008

Can Code Make You Feel?

The Mateas and Montfort article provides an alternative view to the reading by Geoff Cox. In “A Box, Darkly,” Montfort and Mateas analyze code aesthetics through the obfuscation of code and weird languages. It even cites the Cox reading when it says, “the aesthetic value of code lies in its execution, not simply its written form.” I found it interesting that both articles started off with common ground but took completely different approaches to the study of aesthetics. Although the Mateas article gives credit to the execution of code, it mostly focuses on its written form. In the beginning, it talks about “beautiful” code, which is something I can understand from my limited experience with coding. When I think about the aesthetics of code, the concepts of elegance and grace immediately come to mind. I share the authors’ opinion that certain styles of coding can be a “genuine pleasure to read.” However, I feel like any discussion about aesthetics should also cover the emotional attributes that something may have. It bothered me that the authors decided not to go into this subject, because it brought a really good question to mind: can code elicit sentimental feelings or emotions? At first glance, I think it can, because of how code is constantly paralleled with spoken language. After thinking about it, though, I can’t come up with any examples of how written code could make me feel anything. Instead of going into this, the article talks about what weird languages and obfuscated code say about code as an aesthetic form. The authors feel that investigating these two phenomena as unexplored areas of aesthetic code is important. I guess I’m just more interested in getting at the heart of how code can be included within a discussion of aesthetics in the first place.

The other article, “The Aesthetics of Generative Code,” is fixated on the executable aspect of aesthetic code. I feel like the authors were completely ignoring the elegance and grace that a programmer could bring to his/her creations. In their discussion of aesthetic code, they held code up against poetry to identify its aesthetic attributes. I have a problem with this because code isn’t poetry and doesn’t have to be in order to have aesthetic value. The authors stretched definitions so that they could prove code was structurally like poetry and thus aesthetic. They didn’t, however, actually address the emotion or message that poetry is all about. They danced around that by talking about how both have similar functions and “execute” in similar ways. While both, for example, may play with the structure of the language they’re written in, the result of that play is completely different. Poetry plays with language to get across attitudes, feelings, and ideas. One can play with the language of code all day and still produce the exact same result.
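
To make that last point concrete, here is a small sketch of my own (not an example from either article): two C functions written in completely different styles that nonetheless produce exactly the same output.

    #include <stdio.h>

    /* Version 1: spelled out step by step. */
    int sum_to_n_verbose(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) {
            total = total + i;
        }
        return total;
    }

    /* Version 2: a terse closed-form one-liner. */
    int sum_to_n_terse(int n) { return n * (n + 1) / 2; }

    int main(void) {
        /* Both print 5050: the "play" with style changes nothing in the result. */
        printf("%d %d\n", sum_to_n_verbose(100), sum_to_n_terse(100));
        return 0;
    }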

For class tomorrow, maybe we can spend some time talking about whether or not code can make people feel something.

The Purpose of a Language

In our discussion on the multiple aesthetics of code, we mainly focused on form and function. A few made arguments that when analyzing aesthetics of code, one shouldn't take into consideration the aesthetics of English (or, however it is humans communicate — we just happen to use English), but instead the fact that this code is made for a machine — a machine which doesn't share our same notions of form and structure.

Throughout this discussion, however, I feel something important was overlooked — namely "purpose". What's the purpose of code? Why do we have programming languages?

In the early stages of computing history, the sole purpose of code was to control a machine. Because of this, it made sense to have the structure of the language match the structure of the machine. Obfuscation of code was simply a by-product of this fact, seeing that your instructions had to match the specific features of the machine — not a human being.
As time went on, though, and as more scientists made advances in mechanical computing, more machines popped up. IBM had their machines, Cray had theirs, and Digital Equipment Corporation shipped a whole bunch of PDPs, so writing software for each of these machines meant bending your brain to convert your thoughts into however each individual machine "thought". That is, each time you wanted the computer to do something, you had to not only think it through yourself, but then also think of it in the way that the computer understands data — whether it meant having a certain series of switches flicked to the "on" position, or specific holes punched out in paper tape, you had to convert your human logic — each time — into whatever arbitrary system a hunk of hardware happened to use.
The purpose of computers was to alleviate our own minds of repetitive and tedious computation, but instead, we found ourselves enslaved to them at this point. We were not only forced to think how one particular machine thought, but every particular machine that anyone ever made.

Clearly you see the problem.

The form and function of mechanical computation was undoubtedly preserved, but the purpose of mechanical computation was lost.

This led to the creation of possibly one of the most important programming languages in existence — C. The idea behind C was to bridge the two "modes of thinking" (that of human, and that of machine), without sacrificing the human's ability to harness the power of the machine. Two employees at AT&T Bell Labs had tired of their previous programming language's inability to harness more of their machine's features, and so made C to improve upon it. Yes, we're writing instructions for machines, but ultimately we are humans. We want to be able to think and map out ideas in a way that makes sense to us and our peers. Plus we want it to work efficiently. And on a multitude of machines. Does that sound like too much? Or perhaps overly demanding? Well, we made the darn machines, right? We're smart folks, why not get it right (or at least try to)?
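
As a rough illustration of that bridge (my own sketch, not anything from the readings): the routine below is written once, in terms a person can read, and it is the compiler's job, not the programmer's, to translate it into whatever instructions each particular machine happens to use.

    #include <stddef.h>

    /* Average a list of numbers. The arithmetic reads like the thought behind it,
       and the same source can be compiled for any machine with a C compiler,
       from a PDP-11 to a modern laptop. */
    double average(const double *values, size_t count) {
        double sum = 0.0;
        for (size_t i = 0; i < count; i++) {
            sum += values[i];
        }
        return (count > 0) ? sum / count : 0.0;
    }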

Anywho, back to aesthetics. I believe that to say the aesthetics of the language should be determined based on the structure of the hardware it's used on is to completely ignore the purpose of computing in general. We did not invent computers to enslave ourselves to a lifetime of conforming to each individual machine's architecture. We're smarter than that. We're humans — remember? We invented computers to liberate ourselves by allowing a way for our own human thoughts to be performed quickly by an external entity. Therefore, if the aesthetics of the language doesn't coincide with the same set of aesthetics we would apply to instructions written for another person (or in other words, if the code is not easily read and understood by the human writing, reading, or editing the code), then it doesn't matter that your code is beautifully in sync with the underlying machine. You've added yet one more hurdle a human mind must clear in order to get that machine to do what he or she means. Yes, this is my opinion, but I believe this counters the purpose and goal of mechanical computing. Until the day when machines write code for other machines, the code should support the aesthetics of its (human) author.

-JTF


Wednesday, February 27, 2008

My efforts to put these GDC things under a cut just seem to break HTML on the board, so I'm just going to post them straight up, and my apologies if it gets too spammy.

IBM's Bluegrass tool was developed to deal with the social drawbacks of having teams of programmers who work in different facilities in different parts of the world. The program creates a pseudo-social environment, you might say, where employees (through their avatars) can travel around and meet each other in a virtual world that's specifically designed to appear welcoming: it's a sunny, open meadow. The employees all have avatars to represent themselves, and these avatars can 'speak' in word bubbles typed out by the user.

Each avatar in Bluegrass travels around with information about the employee who uses it, such as their company contact info, past projects they've worked on, and what they're doing right now. Mousing over the avatar will cause this information to appear; right-clicking on this information turns it into a game, such as a jigsaw (of their employee photo ID) or crossword (of their personal information), which all nearby users can see and interact with. This creates a kind of dinner party mood, with everyone pitching in to offer random comments or help with the puzzle. In the process, they learn about the person who's been clicked on.

Users can move from the meadow into different environments. The one we were shown in the demonstration was a boardroom - of sorts. In this boardroom, you can create tangible speech bubbles that can actually be picked up by anyone and moved around the room. Since the room is divided into 'Yes,' 'No,' and 'Maybe' sections, you can imagine it makes the decision-making process much more clear.

  • programmers, at work, put their heads down and work on code, but occasionally stop to get food, etc., so they see people from their team and other teams
  • now many teams are scattered over different areas, don't get to see each other
  • IBM created Bluegrass, giving every worker an avatar and bringing them together in a virtual world that looks like an outside park
  • live RSS feed tells user what others are working on, where they are going
  • mouse-over people's avatars produces a picture of them
  • first theme, therefore, is visualization
  • top-down 3rd person view chosen over first-person because it creates a common perspective, more useful in brainstorming sessions
  • can type up a suggestion as a tangible chat bubble, e.g. "make sure network cable is plugged in"; there's a boardroom screen where you can move these chat bubbles around into no/maybe/yes categories
  • replicates the experiences of a boardroom from afar
  • note: the IBM guys are actually connected to this program right now, and can't help but goof around with it during the talk
  • features for socialization are next!
  • each avatar emits little bubbles with personal information about the player, e.g. their profile from IBM's internal 'Blue Book' with contact information, list of projects worked on in the past ... can be potential ice breakers
  • any object, including this contact information, can be right-clicked and transformed into a game (e.g. jigsaw puzzle) that different players can work on together
  • on-screen quotes from IBMers: 'haha I got the first piece!' 'w00t!' 'grrr.' Goofs.
  • players create their own games, esp. when empowered by avatars, e.g. two of the programmers racing each other to the top of the hill during code breaks

Classic Coding and New Coding: a Dialectic?

Mateas and Montfort ("A Box, Darkly") define the aesthetic we often use when thinking of code as a "classical aesthetic." The typical programmer expects to listen to good code like "a symphony, because every instruction [can do] two things and everything [comes] together gracefully.” Classic coding aesthetics attempts to make it easy for the human and computer to read code simultaneously in a straightforward format whilst conveying as much information as possible.

Code in this sense is beautiful much like classical literature: code is easily readable, flows logically in chronological order, and contains multiple layers of truth. Stylistic rules reign supreme, highlighting the "importance of the human reading the code" as well as the syncing of man's logic with that of a machine's. We end up with canonical pieces of classic code that serve as the basis for all future coding endeavors.
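
The often-cited K&R string-copy idiom is a handy example of this classical density, where nearly every token does double duty (the snippet is my example, not one the authors give):

    /* Classic K&R-style string copy: the assignment copies a character, its value
       serves as the loop test (copying '\0' ends the loop), and both pointers
       advance within the same expression. */
    void string_copy(char *dst, const char *src) {
        while ((*dst++ = *src++))
            ;  /* empty body: all the work happens in the condition */
    }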

We can use some of the theoretical framework from "The Aesthetics of Generative Code" by Cox, McLean, and Ward to look at the implications of this classic code aesthetic. Perhaps this type of code aesthetic harkens back to a pre-Kantian aesthetics, or aesthetics as objective beauty? Code in this sense tries to restore the language and thought of an earlier time when logical rationalists could solve the mysteries of the world and empiricists could discover the underlying structures of the universe. This code might act as a reaction to how contemporary language often complicates meaning, where the artist's craft (i.e., elegant, beautiful code) restores meaning. Classic code form does not factor in subjectivity, for it believes an "if-statement" means exactly an "if-statement."

Mateas and Montfort, looking to turn classic code on its head, bring up obfuscated code and esoteric languages as counterpoints to elegant code. These two fields complicate the former belief in code as a pure language that sheds all subjectivity. Such odd languages refute "the idea that the programmer's task is automatic, value-neutral, and disconnected from the meanings of words in the world." This code layer bears as much burden as any postmodern language system. Things like obfuscated code and enigmatic programming languages critique and deconstruct the supposed naturalism of classic code. They force us to struggle with the idea of code as a restorer of objective beauty by making "the familiar unfamiliar... [we] wrestle with the language in which it is written." The form of code is destroyed and all that is left is a function of the code. An "if-statement" is not just an "if-statement," but instead can emphasize, weigh, or make ambiguous a conditional statement.

Various practices combat elegant code, from the simple action of removing all whitespace from code to the creation of languages like Malbolge that actually oppose the man-machine, buddy-buddy system. Code no longer sits there to let itself be molded, but fights back against the programmer!
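
Even the whitespace trick alone does a lot of damage. Take the string copy sketched above, strip the spacing and the meaningful names, and the machine still sees an identical program while the human reader is left wrestling:

    /* Functionally identical to string_copy above, just hostile to human eyes. */
    void f(char*a,const char*b){while((*a++=*b++));}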

Are these the new aesthetics of coding?

Cox, McLean, and Ward seem to combine the two overarching methods, classic coding and the new, together. Sure, they argue that like the anti-art Dadaists, obfuscation succeeds as the anti-code by rejecting "the aesthetic conventions of perfection and order, harmony and beauty, and all bourgeois values and taste." However, code, down in its very heart, is a functional language meant to execute itself. We need to understand that code is ambiguous, but code is never left to "chance arrangements, [for] attention to detail is paramount when it is encountered in written form." Instead, this new aesthetic of coding tempers classic coding, but does not destroy classic coding (in much the same way anti-art did not completely wipe out classic art practices).

We are welcome to write as much obfuscated code as we want to remind us that code has no natural aesthetic form; nevertheless, we are reminded by unexecutable Perl poetry that code is meaningless if it does not function. Thus, as is stated often in the essay, form and function must go hand in hand, as if to say the lessons of obfuscated code can add subjectivity to the formerly objective practices of classic coding.

Links of interest:
Funny Unix Commands (referencing the bilingual nature of code)
http://www.bga.org/~lessem/psyc5112/usail/library/humor/funnycommand.html

xkcd comic (with a "shoutout" to Montfort)
http://xkcd.com/380/

Questions related to "Giver of Names:"

  1. If form and function should go "hand in hand" according to Cox, McLean, and Ward, is it required that we see the code of "Giver of Names" to understand the function of the artwork?
  2. Or are we able to successfully reverse engineer how the artwork abstracts reality, similar to what Mateas and Montfort call a "reading" of code?
  3. The blackboxness of the artwork alludes to how this artwork attempts to represent objects, but perhaps the form of the code is much more advanced or meaningful than the code's visible function?

The Mateaus & Montfort article brought up different issues about analyzing code and actually interacting with it that I have felt confused about. In the essay they write that, "code has been made legible to people." Now, I haven't had much experience with code, but when I do see it I have found it quite illegible, as I am sure that most individuals do. So when they say "people," I ask myself, 'what kind of people?' Are only those familiar with the language of code invited to read code? If not, how can others benefit from code?

They then refer to 'weird' coding, something that I had never even heard about, and they explain it as languages made to make any program difficult. After reading this, a person who is not so familiar with reading code may become even more confused. Why would they create a code just to confuse people and make their job of reading code even more difficult? However, when they actually give examples of some weird coding, such as the language Shakespeare, I became more involved with its analysis than I have become for 'traditional,' i.e. not 'weird,' coding. The comedic aspect, the puns, and the difficult challenge of reading a weird language suddenly manifested themselves as an art form. To be honest, I was a bit put off when Mateas and Montfort referred to code as possibly being poetry, of being elegant. When they quote, and I requote, Knuth as saying, "plodding and excruciating to read, because it just didn't possess any wit whatsoever. It got the job done, but its use of the computer was very disappointing," I was shocked. Why is the elegance and poetic feature of code more important than getting the job done? Is not code's main function to get the computer to get the job done? Clearly, for many it is important that it be both aesthetically pleasing and get the job done. But then we have the weird languages of coding. These languages may be entertaining to read, and aesthetically pleasing, but they "are not designed for any real-world application or normal educational use; rather, they are intended to test the boundaries of programming language design." So, if the weird languages fulfill the aesthetic requirement, but do not fulfill any practical use, where do they stand? What becomes more important to the reader of the code? The aesthetic quality or its ability to fulfill a real-world command?

It became apparent that to Mateas and Montfort the aesthetic quality is quite important, for they say that in order to appreciate the code we must be able to see it. The essay "The Aesthetics of Generative Code," by Cox, McLean, and Ward, seems to slightly contradict this idea. They argue that we must be able to "sense" the code in order to really appreciate it. Although they find the aesthetic quality of the code important, they establish from the beginning paragraph that "the aesthetic value of code lies in its execution, not simply its written form." They also find our interaction with code important, but they seem to imply here that the code must be executable, otherwise it may not be wholly appreciated. The code's function is also important. They say, "to separate the code and the resultant actions would simply limit the aesthetic experience, and ultimately limit the study of these forms." This makes much more sense. Although being able to see the code is crucial, being able to sense the code in all of its forms and performativities enables us to become more absorbed into the code and its qualities. We may learn to read it, and enjoy that challenge, without neglecting an aspect.

Monday, February 25, 2008

Last week, as I mentioned in class, I was away at the Game Developers' Conference where, among other things, there was a summit on serious games. I went over there for a couple of sessions, and have my notes typed up for all of them, but I'm going to limit myself to posting one at a time because apparently Blogger has trouble with these things.

That said, the first session I attended was picture-perfect for this class: linking software development to serious games. Well, except that it's more from a developer's perspective than our MCM view on things; it was still an interesting session. This one has two parts, but I'll only cover one of them for today.

These two guys worked for a company whose business is "building gaming into day-to-day work." They presented a product of theirs called "Bug Hunter" which offers incentives for employees to report and repair broken code. They explained the basic philosophy behind their game, which is to entertain and reward people but NOT challenge them as it gets them to work on some aspect of their job. There's a pretty complicated interplay going on here, as they also have to consider competition/cooperation among players, ways to curtail cheating, and the overall impact on the company.


Bug Hunter: A Productivity Game for Software Testing

  • Windows Defect Prevention Team: Robert Musson and Ross Smith
  • building gaming into day-to-day work
  • philosophy: let everyone win, simple prizes, align the game to the job, simple games
  • let everyone win: getting people into the game more important than challenging them
  • simple prizes: enough incentive to play but not enough to cheat
  • align the game to the job: get ppl to focus on core elements of their jobs
  • simple games: simple interfaces using existing tools
  • purpose: motivate behavior, facilitate education, (foster) team working
  • design: set the goals, set the rules, determine organizational impact
  • issues: unhealthy competition, conflicting organizational messages, conflicts between players and non-players
  • Bug Hunter: person who discovers a bug enters information, others vote on its location, resolution, root cause, etc.... quality of information determines how many points they get
  • sample prizes: latte coupons --> prize booklets --> prize lottery (allows anybody to win, even if they don't play often) ... also give players accolades, e.g. expert, hero, legendary
  • cheating is not really productive, as the program doesn't recognize it (and it's so hard to falsify a bug that you may as well spend your time actually finding one?)
  • if you want to test a certain area, e.g. wireless, you can add point multipliers for reports on that kind of code (a rough guess at how that scoring might look is sketched after these notes)
  • http://www.defectprevention.org/Pairs.aspx : vote on various game elements, incorporates Wisdom of Crowds (Surowiecki) and Thin-Slicing (Gladwell) ... HINT: try the candidates one, it's silly
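
The speakers didn't show any code, so the following is only my guess at the scoring mechanics they described: points scale with the quality of the report, and a multiplier can be attached to whatever area (e.g. wireless) the team currently wants hammered on. All of the names and numbers here are invented for illustration.

    #include <stdio.h>

    /* Hypothetical Bug Hunter-style scoring; nothing here comes from the actual tool.
       It just illustrates "quality of information determines points" plus an area
       multiplier for code the team wants tested. */
    struct bug_report {
        int has_location;        /* reporter identified where the bug lives */
        int has_root_cause;      /* reporter identified why it happens      */
        int has_resolution;      /* reporter suggested a fix                */
        double area_multiplier;  /* e.g. 2.0 while "wireless" is a focus    */
    };

    double score(struct bug_report r) {
        double points = 1.0;                 /* base credit for filing at all */
        if (r.has_location)   points += 2.0;
        if (r.has_root_cause) points += 3.0;
        if (r.has_resolution) points += 4.0;
        return points * r.area_multiplier;
    }

    int main(void) {
        struct bug_report wireless_bug = {1, 1, 0, 2.0};
        printf("points awarded: %.1f\n", score(wireless_bug));  /* 12.0 */
        return 0;
    }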

If it ain't Brecht, don't fix it

In Ramsay and Rockwell's essay, they bring up an interesting and as yet undiscussed point for us to turn our finely-tuned powers of analysis towards: the issue of the Brechtian in code, or, what reminds us in code that we are in a representation of reality, and not in reality itself. Is this ever-present in code? Is it not code's very own specific telos?

Yeah, probably, I'd say to both these questions. Ramsay's quotation of Abelson is here particularly telling: "'programs must be written for people to read, and only incidentally for machines to execute.'" This points us to an awareness of code as representation for reality, for the realness of the machine for which we write, as well as an implicit understanding that code is necessarily a representation, a mediation or even a translation. It is a representation of this 'reality' which we desire to change. I want that bit to turn on. I want my screen to show iTunes. I want my browser to 'scroll.'

We mediate these, our desires, through an interpretive language written for an audience that is never only one, but always many. We cannot write or talk just for the machine, as to do so in its most pure fashion is impossible, unless you're that kid from Heroes. We've got to write for other people to understand so that someone somewhere can go in and say "OK, this is what this is doing in my representational view of reality. I have no comprehension aside from a theoretical one of the exact nature of the electrical currents running through this circuitry. Good thing this code is a way for me to figure it out!" Code, in this way, is always a Rosetta Stone to a language more than dead to us and always completely inaccessibly 'in-the-real-world.' To the good Brechtian's horror, though, it seems as though this foregrounding of representation has become so given, so taken for granted, so unequivocally natural in every area from computer science to the layman's ability to understand how computers work, that the very act of foregrounding representation has itself faded into the background oblivion of 'the obvious,' thus closing itself off to any further debate.

I know I'm reading code, Brecht. I am never just 'reading' when I read code the way I just 'watch' a movie like Transformers. Duh. The foreground of representation is explicit, not hidden, and therefore ever unable to be analyzed for there is nothing to 'draw out' or 'unpack.' Representation is the purpose of code whereas communication is the purpose of language. Though, we need communication to actually make the representation work. We do not talk to the machine, we shape reality through it. The only curious part here is that the representation is itself explicitly the telos of code (and we all know it), but the act of representing is the implicit underlying mental process occurring in our minds lying uninterrogated.

So I guess in a way, I disagree with the authors that code is writing and can be separated from the end-computer executable function. Code is writing AND translation, and never just one or the other. We cannot merely analyze it as if it were a kind of writing. When we code, we want an understandable (to other people) symbolic system AND we want the computer to just follow our script.

Which brings me to the most interesting part of the essay:

ramsay: It says here I'm supposed to smile. I don't want to smile.
rockwell: Very clever. Come on, follow the script.

Sunday, February 24, 2008

One thing I found a bit disappointing about Hayles' essay was her strict adherence to theories of speech and writing put forward by Saussure and Derrida, when a lot of what she writes seems to hold particular relevance to Barthes' Mythologies (itself largely influenced by Saussure's theories). What struck me specifically was that her discussion of the structure of signifiers and signifieds in code closely resembles the structure of myth that Barthes proposed. For Hayles, "voltages at the machine level function as signifiers for a higher level that interprets them, and these interpretations in turn become signifiers for a still higher level interfacing with them. Hence the different levels of code consist of interlocking chains of signifiers and signifieds, with signifieds on one level becoming signifiers on another" (45). This process by which signifiers become signifieds is reminiscent of the way in which signs (consisting of signifier and signified) become new signifiers in the "second order semiological system" of myth. Hayles also discusses the ways in which the layering of code helps to naturalize its own processes, similar to the way in which myth masks over its own history and presents itself as nature and common sense; "the more the worldview of code is accepted," she argues, "the more 'natural' the layered dynamics of revealing and concealing code seem. Since these dynamics do not exist in anything like the same way with speech and writing, the overall effect... is to validate code as the lingua franca of nature" (55).


The way Hayles structures the functions of signifiers and signifieds in code differs in some key ways from the way they function in myth for Barthes. For example, while Barthes' theory accounts for mainly two layers of the signifier-signified process - meaning and myth - Hayles proposes that this process occurs throughout many more layers within the computer and in two directions (low level to high level to low level, and so on). Nevertheless, the similarities still exist. So this begs the question: ARE these dynamics as unique to code as Hayles claims? What are the significances of these similarities? I would argue that the layering of code and the dual directionality of the signified-signifier relationship could be applied more broadly and metaphorically - that, in some ways, computers as we experience them today are something like myth-machines, constantly constructing and deconstructing ideologies and offering them up as common sense. At this point that may seem an obvious statement, and I suppose it could be applied to just about every medium and every technology. But the argument that this layered process of signified turning into signifier is fairly literally acted out within the hardware of the computer itself, in the form of voltages, binary, commands, etc., is something I'd never considered before.
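
A tiny illustration of that literal layering (my own example, not one Hayles gives): the very same stored bit pattern is a set of voltage differences to the hardware, the number 72 to one interpreting layer, and the letter 'H' to the layer above that.

    #include <stdio.h>

    int main(void) {
        unsigned char byte = 0x48;            /* one bit pattern: 01001000 */

        printf("as bits:     ");
        for (int i = 7; i >= 0; i--)
            putchar(((byte >> i) & 1) ? '1' : '0');
        putchar('\n');

        printf("as a number: %u\n", byte);    /* 72: the signified of the bits...      */
        printf("as a letter: %c\n", byte);    /* ...becomes a signifier for the letter */
        return 0;
    }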

Note: I haven't read Barthes in a while, so I apologize if I've horribly oversimplified his arguments...

Wednesday, February 20, 2008

Manovich rather brashly claims that “the software artist makes his/her mark on the world by writing original code.” He seems to insinuate that this “original” act is superior to the mere poking of the postmodern media artist’s attempts to break down and interpret “media” itself. In claiming that “an artist who samples/subverts/pokes at a commercial media can never compete with it,” Manovich constructs what seems to be a problematic binary; original code is good, deconstructing commercial media is bad/dated.

Manovich overlooks the fact that software (an interface for or an abstraction on top of hardware) is primarily a commercial medium. A few shining examples (Firefox, Linux, etc.) aside, the majority of software is proprietary and developed by large, commercial organizations (Microsoft, Apple, Sun, Facebook…). While the contemporary “software artist” that he celebrates as the new romantic is undoubtedly an important figure, the need for critical analysis of the new media of software and interface is still very important, and the role of sampling/subverting/poking at media is still paramount. If media art and media artists are fundamentally concerned with breaking down and understanding the media of our lives, it seems that the prevalence of [new] media in our lives demands the attention of [new] media artists. Works like Auto-Illustrator, Carnivore, and Webstalker are all commentaries/deconstructions/subversions of commercial and/or governmental software.

In fact, in many ways the real value of Manovich’s software artist is in examining the very fabric of digital media, code.

Separately, Florian Cramer’s discussion of Composition 1961 No. I, January by La Monte Young and his examination of James Joyce were fascinating to me. The way in which Joyce packs volumes of information into single sentences and even non-words always resonates with me in contrast to Manovich’s claim that the modern artist is a “genius who creates from scratch, imposing the phantoms of his imagination on the world,” particularly considering the density of cultural references in Joyce’s work. It often seems that Joyce imposes the phantoms of the world (through references to Shakespeare, Aristotle and thousands more) onto the minds of his characters.

As a final note, the discussion of literature and musical scores as seminal pieces of software art makes me wonder about the role of theatre and scripted action in relation to our discussion of performativity. Like code, a script literally does what it says, when run through the right hardware (a theatre, trained actors who have the script memorized). Just a thought.

I don't know

I thought I understood the point that was being made about all computer art being reducible to binary code, but I'm confused by some of the latest stuff. Cramer and Gabriel seem to think that it's a mistake to look at digital art as a media art in the sense that analog representations are media art, because this focuses rather on the arbitrary image on a computer screen that represents the collection of ones and zeros that is "really" the work. It would appear to me that this is missing the point--isn't computer art interesting because of the different processes that lead to similar results, i.e. visual representations on a monitor, verbal or otherwise? So we have Joyce creating a collage of words, some from different languages, with several different levels of meaning. I guess the idea is to play with meaning, and see the different ways it can shift around--there were those who thought Finnegans Wake was complete nonsense, but meaning has certainly been found in it. How does the concept of "meaning" apply to the poem Gysin and Burroughs made with an algorithm?
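
For anyone who hasn't seen the cut-up/permutation technique, here is a toy version of the general idea in C (my own sketch, not Gysin and Burroughs's actual procedure): cut a line into words, let chance reassemble it, and then ask where the "meaning" of the output comes from.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int main(void) {
        char line[] = "cut the words apart and let chance put them back together";
        char *words[32];
        int count = 0;

        /* cut the line into words */
        for (char *w = strtok(line, " "); w != NULL && count < 32; w = strtok(NULL, " "))
            words[count++] = w;

        /* shuffle the words (Fisher-Yates) */
        srand((unsigned)time(NULL));
        for (int i = count - 1; i > 0; i--) {
            int j = rand() % (i + 1);
            char *tmp = words[i];
            words[i] = words[j];
            words[j] = tmp;
        }

        /* print the rearranged "poem" */
        for (int i = 0; i < count; i++)
            printf("%s%s", words[i], (i + 1 < count) ? " " : "\n");
        return 0;
    }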

Anyway, this is a pretty cool thing: http://kotaku.com/358459/pc-psychic-controller-hits-this-year
This is probably going to be a failure commercially, but it says a lot that this kind of thing is already being mass-produced. Apparently this machine can tell apart different thoughts to execute different functions. I wonder how far away that is from being able to represent an internally visualized image on the screen?

Thursday, February 14, 2008

Wednesday, February 13, 2008

Capture as Surveillance

While Surveillance in a "Big Brother" sense is easy to visualize and identify the negative implications of, the Capture model seems to achieve the same level of surveillance with regards to notable moves on the internet (items purchased, conversations through email, websites visited). Is the fact we do not see the physical face of a "Big Brother" the main reason we as a society do not object to the capture/surveillance capabilities of the internet? Are most people simply enamored with the abilities to communicate on the internet and not thinking of where the information they give can be logged? While most people are aware of viruses and what spyware can do, they do not think of those in power of social networking sites (facebook, myspace) and the "private" information they post on them. While it becomes more and more normal to (and strange not to) conduct business and socialize on the internet, it becomes more and more normal for all of these activities to be logged. Since these logs can be monitored and accessed by someone in power, the Big Brother seems to exist in the modern capture model.

Surveillance, Control, Capture & Why We Are Becoming Computers

It feels like a big jump in conceptual frameworks to move from interrogating the mere presence of software to reflecting upon how we are mapped and controlled by technology at large. That said, Philip Agre's approach to control in the media culture we live in does a great deal to demonstrate just how blurred cultures and computing have become (recalling again Lev Manovich's cultural and computer layers).

What is Agre's argument? In a few words, Agre seems to be updating Gilles Deleuze's concept of the "control society" for the digital, computational age. His update could be called "a capture society," which is contrasted not only with the Deleuze theory (though not explicitly) but also with the surveillance culture that Agre attributes to Bentham's Panopticon, but again not to Michel Foucault, the critic who famously theorized the panopticon as a culture-wide paradigm of control. The glaring oversights of attribution aside, Agre's argument is relatively convincing. For each of his arguments, he provides real-world examples that, though dated (circa 1994), are persistent in our culture.

Chiefly, Agre discusses a sort of technophobia that persists in a society that is growing more and more invested in technology, mostly in computing. His analysis, like other analyses before his own, reveals (p. 743) that as computers collect more and more information on human subjects, the rising level of security begins to cause anxiety and fear in mankind. As Agre points out, the general rise in computing and informatics related to computing has challenged pre-existing (and by this I mean pre-computer) concepts of privacy, and largely trampled them underfoot.

Here there seem to be two questions we should ask. First, what is privacy? Who gets to define it? Are there inalienable standards of privacy that cannot be trespassed upon in human society? Or, as Agre himself points out many times, are humans simply naturalized to ever-shifting standards of privacy?

The second question would relate to the technology that is felt to be invasive. Is it possible for a technology (from the Greek techne, for art, craft, or tool) to transgress boundaries of human privacy? Because if that's true, it would seem to follow that mankind has begun to attribute a sort of consciousness to technology, no longer seeing it as a tool-craft, inanimate and non-intelligent, but instead as a sort of sciential entity. This may be because in Agre's capture society, technology seems to be a mediation of another human being, such that all technology archives its experience and can be made to 'betray' its master by providing a record of use and vision. And this would not just be for personal computers. On a larger scale, lots of technology (cellphones, iPods, GPS devices, televisions, cars, etc.) "phone home" to check in with some other entity (human or non-human).

In this way, Agre may be pushing for an understanding that along a telos of human society and attempts at structuring it, man has moved from watching (surveillance) to directing (control) to archiving (capture), all in the attempt to further regulate how men relate to one another and the artificial structure of governing to which they relate. Along this trajectory, it would seem that we are growing more and more controlled the more we relate and become related to technology. "A 'Computer' - understood as discrete, physically localized entity - begins to lose its force" (743). And we become the computer.*

Which explains the latent paranoia.

- - - - - - - - - - - - - - -
*To allude to Katherine Hayles and How We Became Posthuman, the prologue alone of which feels really relevant to this discussion.
- - - - - - - - - - - - - - --
Another few thoughts

1. Is surveillance not a form of capture?
This was the question haunting me throughout the reading. It would seem to me that Agre only differentiates the terms for the sake of his argument and that they are not empirically separable. Nevertheless, Agre's boxing off of the term surveillance (which would seem to operate extensively in capture and vice-versa) seems problematic. I wonder if we the readers also consider the terms separable, and if not, why not?


Though it may seem like trite consideration, the very construction of this post (and by extension the blog, the computer I am using, my operating system etc.) are deeply ingrained in what Philip Agre argues is a system of capture, and implicitly, control.

Tuesday, February 12, 2008

I Dis-Agre

In “Surveillance and Capture,” Agre makes the distinction between surveillance and capture. Writing that “surveillance is a cultural phenomenon,” he argues that the surveillance model of privacy derives from historical experiences such as secret police. The alternative, the “capture model,” he argues, is predicated on linguistic metaphors for human activities as well as structural metaphors.
Towards the end of his argument, he writes (speculatively) that according to Ciborra’s theory of “transaction costs,” information technology, when “applied in accordance with the capture model,” by accelerating the reduction of ambiguity in market interaction, can reduce transaction costs through defining more clearly the relationships between economic actors (753-754). It can also reduce, he argues, information costs because of the “grammars” that it can impose upon an organization’s activities (which, he writes, structure the relationships among the organizational members). What exactly is Agre’s theory of the political economy of captured information and commodified information? As information becomes a commodity within a market economy, as Agre writes, it is possible to think of captured information as a commodity. He then writes that captured information is simultaneously product and representation of the human activities on which it is imposed (755). Capture, he writes, by imposing a grammar on “previously unformalized activities,” prepares them for the transition to market-based relationships.
Earlier in his essay he raises some interesting questions about truth and information, writing that information is presumed to be true because of the historical way in which computers have been used, not because of any existing/real properties of computers (745). What is the relationship between truth, commodified information, captured information, and the political economy of captured information?

Kittler's Reality Machine

While both of Kittler's articles for Tuesday touched on the failings of computers as a universal representation system, I will keep this post limited to the essay "There is No Software," as it focuses solely on digital representation and does not wander off into the field of visual theory.

First off, what does "There is No Software" mean? Essentially, software is a slave to hardware. All developments following the building of these programmable hardware systems simply abstract from the electric pulses that run through a machine. As Kittler points out, the engineers who took these natural materials (silicon and oxide) to build the computer in essence created everything that we do now: 1 or 0.

"The last historical act of writing may well have been the moment when, in the
early seventies, Intel engineers laid out some dozen square meters of blueprint paper.... This manual layout of two thousand transistors and their interconnections was then miniaturized to the size of an actual chip, and, by electro-optical machines, written into silicon layers."
(Kittler, 1)

Kittler then writes about our society using this machine as a representation medium. His argument states that by following the building of the computer, we are a part of this computer's technological system. This system, first theorized by Turing, allows for a collection of machines to communicate with one another endlessly. The binary system would thus classify as our "language" - superseding hardware and software, and allowing for universal representation (a Universal Turing machine).

Such "Universal" implications pave the way for the Church-Turing hypothesis. This hypothesis says "nature itself a universal Turing machine" (Kittler, 1). With a fast enough computer, we could theoretically map and simulate all processes. While Kittler argues that such a computer will never exist, the computer still holds weight in regards to many human-technological practices. All software, writing and other digital media technologies depend on hardware since they are simply higher level layers of binary, or "far reaching chains of self-similarities" (Kittler, 2).

In this scenario, the computer hardware is the end-all, be-all. It is "the linguistic agent ruling with near omnipotence over the computer system's resources, address spaces, and other hardware parameters" (Kittler, 2). The burning of silicon in transistors gives hardware material agency as well. The computer reduces everything to "signifiers of voltage differences" (Kittler, 3). In an effort to deal with the postmodern world, a world that lacks meaning, the computer establishes itself as the meaning maker.

Kittler argues that we mistake this binary world of the computer as our own natural world, and we are therefore forced to submit to technology, like the "books and bucks" that Western civilization has submitted to before. Some realize this and enjoy the subservient role (Turing liked to read hex code) while others need the GUI to mask the onus they carry for the machine. Hardware is our universal language, and hence we forget it and think of software as our universal controller. The computer makes us value work that reduces tasks, thoughts, and concepts to the simplest algorithm possible. Commerce, art, and culture rely on hardware to establish meaning, and that leads to belief in the computer logic as our nature.

The mistake that we can mirror reality in the computer falters on the assumption that we have Turing's computer, "a machine with unbounded resources in space and time, with infinite supply of raw paper and no constraints on computation speed" (Kittler 5). We cannot place some things in binary form - namely nature. Programmability works when everything has "some notation system," which reality itself does not. Kittler says computers are great for representing anything... as long as anything is representable within boolean logic. Even the basic elements that make up these discrete machines (oxide and silicon) are prone to logical errors: "there is electronic diffusion, there is quantum mechanical tunneling all over the chip" (Kittler 6). Nature is complex, connected, and it would take an impossibly fast computer just to simulate reality.
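
To see what "representable within boolean logic" looks like in practice, here is a small sketch of my own (nothing from Kittler): ordinary addition rebuilt out of nothing but AND, XOR, and shifts, i.e. out of the kinds of operations a pile of transistors performs on voltage differences.

    #include <stdio.h>

    /* Addition built only from bitwise gates: AND finds the carries,
       XOR adds without carrying, and the shift moves carries one column left. */
    unsigned add_with_gates(unsigned a, unsigned b) {
        while (b != 0) {
            unsigned carry = a & b;
            a = a ^ b;
            b = carry << 1;
        }
        return a;
    }

    int main(void) {
        printf("%u\n", add_with_gates(19, 23));  /* prints 42 */
        return 0;
    }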

But....

Kittler leaves room for a machine to fill the Church-Turing hypothesis. A machine that is non-programmable and focused on "maximizing noise," not reducing itself to a basic set of binary numbers. Although the machine would take algorithms, it would "work essentially on a material substrate whose very connectivity would allow for cellular reconfigurations" (Kittler 6). Such a machine would be capable of representing reality in all its complex forms.

Questions:
1. I assume Kittler's proposed machine would have no software to speak of, not even our imagined software. But how exactly would "cellular configurations" be purely hardware? Is he talking about something like chemical/biological machines? Aren't machines just incomplete representations? You would need a machine that could represent reality to work as stated.

2. The following quote challenges our notion of the originator because of our reliance on the algorithm: "In other words, the value of a message is the amount of mathematical or other work plausibly done by its originator, which the receiver is saved from having to repeat" (Kittler 4). Do we have any concrete examples of this? Or perhaps even a re-worded definition?

3. Most importantly: how do we now define software? Are you convinced software is just an abstraction of hardware? How might things like social networks change Kittler's reliance of software on hardware?

Monday, February 11, 2008

Is this a post?

As Kittler’s “There is No Software” still continues to confuse after multiple reads, I’ve decided to lay out the framework of what I believe he seems to be saying before I tackle my own questions and opinions.

Kittler says that through a “system of secrecy”, “perfect graphic user interfaces, since they dispense with writing itself, hide a whole machine from its users. Secondly, on the microscopic level of hardware itself, so-called protection software has been implemented in order to prevent "untrusted programs" or "untrusted users" from any access to the operating system's kernel and input/output channels”. If I am reading Kittler’s non-writing with any semblance of correctness, this premise that everything and its processes are hidden and obscure results in the operator having no understanding of what is actually going on as he types into his computer. This ignorance results in the operator losing ownership over what he has written - in fact there is nothing to own at all, as the operator cannot touch nor understand the computer memory’s transistor cells: “written texts - including this text - do not exist anymore”. In addition, Kittler seems to be saying that we must emphasize hardware - the corporeal, the electronic signals, the mechanical - over software, as our ignorance of the relationship between software and hardware leaves us dangerously vulnerable to those few (and those corporations) who do understand.

There are a couple of things that I feel inclined to disagree with here. Firstly, “There is No Software” seems to me like another death-of-the-author article, which is not new in terms of postmodern literary criticism. However, after spending some time with the lines “we do not write anymore” and “written texts - including this text - do not exist anymore”, Kittler goes beyond just the death of the author; he is claiming the death of literature and of the actual process of writing itself. Does new media theory render literature obsolete? Does the obscurity of the internal processes of computers and their hidden programming cause operators in turn to become obsolete and ignorant? Do we really have no authorship over the activities of our computers?

If we believe Kittler’s statement that, because we don’t understand how our computer works and because software is veiled in layers and layers of protected secrecy, we don’t own or understand our work, then what about the fact that I have no idea how electronic signals work in my own brain to form these thoughts, or how they are then coded into further signals that trigger muscles I am unaware of: do I not then own my own thoughts and muscle movements?

In all, I think “There is No Software” is more sensational than productive. Kittler makes grandiose statements that are tinged with condescension and expectancy, as if we should already know what he is saying.

Well, I'm quite dizzy. I tried looking up the Church-Turing thesis that was mentioned in the "There is No Software" article, but Wikipedia is no help at all. Anyone have a dumbed-down explanation they could share? I'd appreciate it. Without knowing what it is, I have a hard time figuring out how Kittler's using it in terms of that article.

Bearing in mind that I may have missed one of his main points, I keep trying to draw lines with Kittler and I keep coming up short. If we can reduce software to the point that there is no software anymore, how do we define hardware any differently? I want to say that the hardware is where the action ultimately happens, but even that's kind of arbitrary, because there are yet more layers of technology behind the hardware - and they're as impenetrable to me as software or Church-Turing theses.

I don't find it very productive to reduce a computer all the way back to Mother Nature, so I'd like to have a line drawn somewhere, but I don't think that Kittler does that. That's why the argument fails for me: if I'm going to draw lines arbitrarily, I may as well start with software. Is there a better way to delineate between the two, or have I misunderstood Kittler's reasons for not recognizing software in the first place? It's an interesting idea. 

Sunday, February 10, 2008

If anyone's looking to exercise their brain a little (or a lot), try Notpron. Look at the clues in the image and accompanying text to figure out how to get to the next image. The riddles get harder as you get farther along in the sequence (hint: after the first few, there's a lot of source code and HTML manipulation involved). Enjoy!

Wednesday, February 6, 2008

We should do this



Well, perhaps code does have a life of its own

Coderly Texts

Nanaca Crash - I got 3999.51m. Dammit! .41m short!

In reading Adrian Mackenzie's "Introduction: Softwarily", I was reminded a bit of my previous readings on Readerly/Writerly texts and the ideas of Work vs Text by Barthes. The line that did it for me was "consumption is not a passive activity but a highly complex and variable process" (8). Indeed it is, and according to Barthes any form of reading/consumption is also an act of writing/production; when talking about books he describes how a work that is read is assembled into something greater than simply the words on the page by the reader who receives it: a text. In such a way, not only does the author of a text have agency in writing something, but the act of reading also has consequences in creating something new in the reader, giving the reader agency.

Now I feel Mackenzie's ideas about agency in software fit in nicely with Barthes's theory on texts, works, authors, and readers, but in some ways are also problematic because of the nature of code. Mackenzie describes how both originators and recipients have agency in creating the repercussions and use of a piece of software. Eventually, he admits that the barriers between originator and recipient sometimes collapse into one, much like Barthes describes how the reader is also a producer of meaning (and later how Jenkins talks about fan-culture grassroots production).

Now I haven't really shut up in class about how code is socially formed and forms society, and it's interesting to note how originators create software with a specific use in mind (their own agency), while recipients use the code in their own way (changing or adopting it, applying their own agency). Sometimes the code is structured around a model or prototype (again, the imitated object has exercised agency on the programmer, code, and user, who all have the prototype culturally ingrained in them). However, sometimes the code produces unintended effects that are neither a result of the coders or users nor an imitation of the prototype it is modelled on; things such as a bug, crash, or memory dump lie in the agency of the code. What's interesting to note here is that while Barthes admits that texts are influenced by "prototypes", in a sense, the executability of code gives agency to the very text being studied.
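
A trivial example of what I mean by the code's own agency (my example, not Mackenzie's): the snippet below contains no coding mistake by the originator and no misuse by the recipient, yet the comparison fails anyway, because binary floating point simply cannot hold 0.1 or 0.2 exactly.

    #include <stdio.h>

    int main(void) {
        double sum = 0.1 + 0.2;
        if (sum == 0.3)
            printf("0.1 + 0.2 equals 0.3\n");
        else
            /* this branch runs: the stored value is 0.30000000000000004... */
            printf("0.1 + 0.2 is actually %.17f\n", sum);
        return 0;
    }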

Where lies the "authorship" of code's agency? Are originators responsible for the actions code takes that are unintended and unexpected? Or are users responsible for "misusing" the program (although their agency lies in using the program any way they want, even taking it apart and rewriting it literally)? Or can we say that it is other code, such as a new OS, that can cause a code to act in an unexpected way when operating in relation to other pieces of software on a machine (an out of date computer game running on Windows Vista, per se)? The idea of the code having agency seems to be an idea that has potential of developing a dangerous idea; that the inanimate software has the potential to be animate, or have intention is somewhere I don't want to go yet. Yet all other compromises to deny such seem unsatisfactory, and possibly Mackenzie's four divisions need to be reassessed. Perhaps Barthes needs to be reassessed. I guess this will be my point of discussion.

1/7/08

Similar to previous readings, the Mackenzie reading, Introduction: Softwarily, made me think about software and code in new ways. One idea I found particularly engaging was that of attaching agency to software. He starts off by pointing out there is a difference between an AI’s capacity for “full-blown” agency and the more widely recognized secondary agency that most software possesses. I’m surprised Mackenzie didn’t go into detail on the agency of AI, especially in light of the questions he raises at the end of that section. I would like it if we discussed in class whether or not software like that in an elevator or streetlight truly possesses any agency. Should agency be attributed to the programmer for outlining what a piece of software can do, or to the software for actually performing the action? Is a conscious choice a prerequisite of agency? One could go in circles all day without clearer definitions of what agency is in the world of software.

The other reading, The Performativity of Code: Software and Cultures of Circulation, was mostly about the Linux kernel, its evolution, and the nature of its existence. Reading about where Linux came from and how it exists in cyberspace today was extremely interesting. It also brings some good questions to mind. For example, can Linux truly exist outside of the traditional sense that it is just code? The reading talked a lot about the many incarnations of Linux and it made me wonder if RadioFreeLinux truly qualified as another version of Linux. For me at least, the fact that it operates as code on a machine is central to its definition. It seemed to me that Mackenzie was suggesting that any incarnation of the code was equally valid as Linux. Like the other reading, I think this would make for an interesting discussion during class because one could go back and forth all day.

Mistaken Premises

All page numbers are from the PDF link sent to the class.

In The Performativity of Code: Software and Cultures of Circulation, Adrian Mackenzie attempts to describe the role of the Linux kernel as a cultural object. He walks through much of the history of Linux and examines how different aspects of its development shaped the cultural force it has become today.

While he makes many valid, interesting points, much of his analysis is flawed due to inaccurate or incomplete background research.

One major component of Mackenzie's analysis of Linux revolves around its licensing scheme, under the so-called "copy-left" (as opposed to copyright) GPL. Mackenzie writes that the GPL "prevents the software itself being sold" (Mackenzie 6), which is a common misinterpretation of the GPL. The GNU Project, the group responsible for the GPL, has a page which addresses this:

"Free software is a matter of liberty, not price..."

Mackenzie repeatedly alludes to the fact that no money is necessarily involved in acquiring and using Linux, but I question what portion of his conclusions is affected by the fact that working on free software challenges only notions of accessibility, not necessarily economics.

Mackenzie also ascribes great weight to the fact that Linux started out testing some i386-specific hardware (18), and while he does not explicitly state it, he strongly implies that Linux was the first to do so. However, Microsoft of all companies was selling its own version of UNIX, Xenix, which ran on the i386 processor five years before Linux was conceived. So it was not Linux that brought UNIX to commodity hardware. Linux certainly made it more affordable, but it was not the first to make UNIX possible on a widespread hardware platform.

Similarly, Mackenzie stresses how Linux's plethora of differently branded distributions differentiates it (11). He alludes to UNIX having a similar past when he mentions how the lack of official support for UNIX "forced users to share with one another" (16). But he neglects to explore the depth of this influence: not only did UNIX users share with each other, but UNIX implementations splintered in various ways, similar to the divides between the various Linux distributions. The most significant fork eventually became its own operating system, Berkeley UNIX (also simply called BSD), which has since evolved into a number of others, notably FreeBSD, OpenBSD, and NetBSD. Reading the names alone hints at the philosophical underpinnings of the branches. It has in fact often been theorized that were it not for a lawsuit against Berkeley over BSD (around the same time as Linux's inception), BSD would have held the place currently occupied by Linux: http://en.wikipedia.org/wiki/USL_v._BSDi

Despite these shortcomings, Mackenzie accurately detects the cultural icon that Linux (or, more generally, openly available kernel code) represents. I have a copy of a book, Lions' Commentary on UNIX 6th Edition, which contains the full source code and explanation for 6th Edition UNIX. The book was originally a pair of manuscripts produced at one university, then circulated for years via photocopy, and it is a fond memory for many experienced kernel hackers. I'll bring it to class tomorrow. It's interesting to me to see things similar to this (verbal readings of Linux source), which I simply absorbed in the past, drawn out as significant trends in a larger culture.

After reading "The performativity of code: software ad cultures of circulation" by Adrian Mackenzie, I was interested in the idea of one program cloning another program. What made Linux better than Unix was that people could update the software and get help with it. Although Unix did not make any profit and Linux was free, it reminded me of the conversation that we had in class last tuesday, when someone asked why we rely on technologie advancing. Another student answered that it was for money. However, in the case of Linux, it was not for money but for freedom. Today, companies more often than not 'clone' software ideas from other companies. For example, shortly after Apple released the iphone, Verizon released a phone that also featured a touch screen. Modern situations like this indeed do perform based on money. Advances in technology exist in order for corporations to make a profit. The creator of Linux seemed interested in the idea of software freedom therefore creating a program "by hackers, for hackers." My question is, are people actually satisfied with the limited freedom that many software gives us, or do people feel that it gives us enough freedom? Should more people create software clones for freedom witout profit? Or has this idea become against our cultural norms? Do we want the freedom or are we comfortable with the amount that we have?

Tuesday, February 5, 2008


This is a famous screenshot of Word at its worst; it was always in the back of my mind while reading about the mountain of features in Fuller's Microsoft Word article.

When Office 2007 premiered, the UI was redesigned. The tabbing and context-sensitive menus were meant to eliminate the search for features by presenting those features the user would most likely need based on the task being performed. The reorganization of icons into more task-driven tabs was also meant to eliminate the need for deliberate thought and instead allow the user's instinct to guide them to the proper tab. It was surprising, and worth noting, that many first-time users felt they needed to learn a whole new system, one which was designed to eliminate the concept of a system. Either the usability experts missed their mark, or (what is quite probable) the old system was so ingrained in users' habits that it took some effort to adjust to a more intuitive interface.
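
As a rough illustration (my own toy sketch, not Microsoft's actual design), context sensitivity amounts to something like this: the set of features offered depends on what the user is currently working on.

    # A toy model of a context-sensitive "ribbon": contextual tabs appear only
    # when the selection calls for them, so the user never has to hunt for them.

    def ribbon_tabs(selection):
        tabs = ["Home", "Insert", "Page Layout", "Review", "View"]
        if selection == "table":
            tabs += ["Table Design", "Table Layout"]
        elif selection == "picture":
            tabs += ["Picture Format"]
        return tabs

    print(ribbon_tabs("text"))     # the base tabs only
    print(ribbon_tabs("picture"))  # ... plus 'Picture Format'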

What particularly rings a bell is a quote from Heim in Fuller's It Looks Like You're Writing a Letter: "As the user learns the new system, the language installs the user in the system." I believe it is quite relevant to acknowledge the reverse: the system installs itself in the user. The user conforms to a model defined and designed by (in this case) the usability experts at Microsoft. In the reading from last week, Fuller mentioned Alan Cooper's HCI approach, which is driven by a "stereotypical user": "they are imagined as full 'characters', users of the system which is reworked...in order to meet an aggregate of their needs." It is necessary to acknowledge that Microsoft's approach is just this; these 'characters' are referred to as archetypes, are often cited in design documents, and drive the feature scenarios. Fuller acknowledges this as a flaw and even mentions the disappearance of the user from the system's HCI design. In an alternative view, we can even take Fuller's argument and use it to counter the first quote: the users are not installed into the system; their archetypes are packaged within it, and it is only the reverse, the incorporation of themselves into the model, that holds true.

What is also interesting to note is that the main bar, which Fuller described as showcasing the most essential features (in contrast to text animation, which is hidden deep within the font formatting menu), is often echoed in word processing software, such as this blog, Gmail, etc. It makes me think of last week's Fuller reading, which touched upon Open Source software and its fatal attention and conformance to proprietary software's standards. It makes me wonder whether this standardization of the main bar exists because it is the most efficient way to provide access to what are considered the essential word processing features. Or has this standard installed itself so deeply into ourselves that we have made it so without question?

Monday, February 4, 2008

I found this week's reading especially interesting in relation to last week's material, particularly when comparing Fuller's "A Means of Mutation" with Galloway's "Code" and our discussion about executable code and performative language. Whether or not we agree that code is the only language that is executable, one of the most important aspects of code as it is most frequently used is that it is functional - it makes things happen. Yet Fuller argues that the surface aspect of code (i.e. software and browser composition) achieves the exact opposite. He states that commercial software "is continually dragging this space of composition, network, computer, user, software, socius, program production, back into the realm of representation... rather than putting things into play, rather than making something happen" (64, italics mine). In other words, by focusing on graphic representation and the page metaphor, web browsers are typically coded so that nothing happens, even though making things happen seems fundamental to code itself.

Our reading contains a lot of similar paradoxes, and I think it would be interesting to discuss or try to determine where these paradoxes come from and why they exist. For example, Fuller's article "It Looks Like You're Writing a Letter" suggests that as Microsoft Word abounds with more and more tools that seem to give users more options and more ways of styling text and documents, features such as spelling and grammar checks and pre-defined templates constrict the user's scope in terms of document design and use of language. In Word, tools can be used only as the program intends them to be used. Thus, more tools and options do not free the user, but rather increasingly cause him or her to get lost within the menu structure of the program and its hierarchy of applications. Is this a result of Microsoft's commercial nature, economic/historical/cultural factors, user expectations, or even aspects of the code itself? These are questions I hope to be able to at least touch on briefly during our discussions.

More Metaphors

In A Means Of Mutation, Fuller exposes and problematizes the seemingly natural metaphors present in modern-day computer terminology. He exposes these metaphors for what they are: necessary media for the human interpretation of raw data. He cites 'page,' 'desktop' and 'wastebasket' as examples of these metaphors, but we could easily extend the analysis to 'file,' 'folder,' 'button,' 'window,' etc. The sheer volume of such metaphorical representations of pure code is mind-boggling, as is their necessity for the base functionality of the processes they perform.

Rather than beat this to death, I wish to draw attention to another metaphor left implied but unarticulated in It Looks Like You're Writing A Letter. When describing the interface construction of word processors, namely Microsoft Word, Fuller tells us "you have to go through several layers of interface to switch off 'Grammar' and 'Spelling,'" and thus reinforces, as he does several times throughout the essay, the metaphor of depth that runs rampant through the logical organization of human-computer interfaces. We can observe this in our own everyday computer lingo: it is easy to imagine saying "I had to look around inside My Music for, like, half an hour until I found that MP3," or, "the virus is so deeply embedded in your system now, you'll have to go inside your system registry and edit the file." Perhaps even the code itself is a metaphor of {[(depth)]}. What would be interesting to examine, given that the metaphor of depth pervades any properly deep analysis (let's unpack what that means...) even in the non-computer theoretical world, is how this metaphor works with respect to code, as it does with respect to truth in everyday parlance.
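
As a small, hypothetical illustration (mine, not Fuller's) of how literally ordinary code encodes this depth, consider how a file is addressed by descending "inside" nested folders:

    # The path to a file is a record of descent: each part is one level "deeper."

    from pathlib import Path

    mp3 = Path("My Music") / "Albums" / "2008" / "track01.mp3"
    print(mp3.parts)                # ('My Music', 'Albums', '2008', 'track01.mp3')
    print(len(mp3.parts) - 1, "folders deep")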

Saturday, February 2, 2008