Network discourse

[Image: 0.30402944246776265, installation view]

Introduction

There are a few concepts behind the idea of self-organized systems that need to be highlighted: a) processes changing over time within the system and b) elements interacting with each other within the system. If the first concept is bound to vitality and evolving processes, then the second concerns the integrity of structures within the system. These concepts are important for the future development of technologies and of human relations to technologies (or vice versa), including such questions as ethics and etiquette.

Considering that interaction, creativity, and emergence in human-machine environments are very important,[1] the interaction of elements within the physical environment requires an in-depth technical analysis. This will provide a more substantial conceptual background for the artistic project 0.30402944246776265, which is intended to communicate to the audience the idea of self-organization in non-uniform networks and of artificial systems able to demonstrate a certain level of vitality and intelligence.

Physically, computer networks are connected to each other non-hierarchically and do not necessarily depend on intermediaries, e.g. servers routing digital information to the end computer. Nevertheless, computers within these networks are usually connected over a network switch or wireless router, thus forming a decentralized (and at the same time semi-hierarchical) network. In order for the computers to interact with each other, the networks are based on certain rules set across different protocols. The Internet Protocol Suite (TCP/IP) is one of the most widely used sets of protocols forming a network of interconnected computers. One of its abstraction layers, the Internet Layer, facilitates the interconnection of networks, enabling digital data to flow among computers. Its Internet Protocol (IP) defines the fundamental address spaces, which are then resolved through the Domain Name System (DNS). In other words, the Internet is defined by TCP/IP and DNS (Braden 1989).
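
To make the relation between domain names and IP addresses concrete, the following minimal Python sketch resolves a hostname through DNS to the IP address that the TCP/IP stack would actually connect to (the hostname example.org is just an illustrative placeholder):

```python
import socket

# Ask the Domain Name System (DNS) which IP address is
# registered for a human-readable domain name.
hostname = "example.org"  # illustrative placeholder
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")
```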

In order to consider dynamics in computer networks, it is important to note that the physical network is distributed: computers are connected over switches, and how they “communicate” among themselves is a matter of implemented software. The connectivity of the prevailing Internet is achieved through a routing process that forwards information along predefined routes to various network destinations, making the network decentralized (Braden 1989). In contrast to a decentralized network (such as the Internet), distributed data routing is also possible (Boehm & Baran 1964) (Fig. 10). Keeping these network architectures in mind, the following analysis considers possibilities for self-organization within the Internet.
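
To contrast the two architectures, the following Python sketch models a decentralized and a distributed topology as adjacency lists (the node names and links are arbitrary toy examples) and checks which destinations stay reachable when a node fails:

```python
# Two toy topologies, expressed as adjacency lists.
# Decentralized: leaf nodes reach each other only via a hub.
decentralized = {
    "hub": ["a", "b", "c"],
    "a": ["hub"], "b": ["hub"], "c": ["hub"],
}
# Distributed: every node keeps several direct links, so traffic
# can be routed around any single failed node.
distributed = {
    "a": ["b", "c"], "b": ["a", "c", "d"],
    "c": ["a", "b", "d"], "d": ["b", "c"],
}

def reachable(graph, src, dst, down=()):
    """Iteratively search the graph to check whether dst is
    reachable from src while the nodes in `down` are switched off."""
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for peer in graph[node]:
            if peer not in seen and peer not in down:
                seen.add(peer)
                frontier.append(peer)
    return False

print(reachable(decentralized, "a", "b", down=("hub",)))  # False
print(reachable(distributed, "a", "d", down=("b",)))      # True
```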

As the most prevalent network, the Internet is also seen as a layer upon which other networks can be constructed in order to permit the routing of digital data in predefined lower-level networks. For example, a Local Area Network (LAN) is usually used for small-scale networking and is defined by a physical space, like a room or house. A Wide Area Network (WAN) or Virtual Private Network (VPN) can span a much broader geographical area but is still used as a defined network by businesses or governments. Those networks are usually constructed upon hierarchical routing rules using intermediary servers, and self-organization within them would only be possible with a set of predefined exceptions, such as elements a, b, and c, which may become self-organized if all of them are connected to an element d.
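
As a toy illustration of such a predefined exception, the following sketch encodes the hypothetical rule above (the elements a through d and their links are purely illustrative):

```python
# Hypothetical exception rule: elements a, b and c may enter a
# self-organized state only if each of them is linked to element d.
links = {("a", "d"), ("b", "d"), ("c", "d")}  # current topology

def may_self_organize(elements=("a", "b", "c"), anchor="d"):
    return all((e, anchor) in links for e in elements)

print(may_self_organize())  # True for the topology above
```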

In contrast to the network systems mentioned above, other network concepts are based on Peer-to-Peer (P2P), or computer-to-computer, connections, which may have a decentralized or distributed character better suited to self-organized systems. However, as mentioned earlier, they would still depend on the Internet layer, organized upon a semi-hierarchical DNS system that is controlled by the Internet Corporation for Assigned Names and Numbers (ICANN). This corporation is responsible for managing all IP addresses, and it therefore makes the Internet centralized from the perspective of IP address allocation.

Self-organization Within P2P Networks

As introduced in the “Computer Networks and Routing Systems” subsection, physically, the Internet layer bears a lattice structure and has no hierarchically predefined architecture. It has been noted that, on top of the physical layer, the TCP/IP protocol implemented for the prevailing Internet is organized upon a hierarchically organized IP address system, so the capacity for self-organized processes within computer networks is limited. Nevertheless, self-organizing processes are not an exception and are often in use: in local networks, for example, computers are usually allocated dynamic IP addresses in order to connect to the Internet. Self-organizing processes are also important in P2P networks in order to localize other computers within the network and to avoid the server-client hierarchy of the prevailing Internet architecture.
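
In practice, such dynamic allocation is handled by a DHCP server on the local network; the following sketch only mimics the basic idea of leasing addresses from a pool (the address range and MAC addresses are arbitrary examples):

```python
import ipaddress

# Toy allocator in the spirit of DHCP: hand each newly connected
# machine a free address from a local pool and remember the lease.
pool = list(ipaddress.ip_network("192.168.0.0/28").hosts())
leases = {}  # MAC address -> leased IP address

def allocate(mac):
    if mac not in leases:
        leases[mac] = pool.pop(0)
    return leases[mac]

print(allocate("aa:bb:cc:dd:ee:01"))  # 192.168.0.1
print(allocate("aa:bb:cc:dd:ee:02"))  # 192.168.0.2
```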

Given that the hierarchically organized Internet is vulnerable for technical and social reasons, P2P networks often focus on the security of the information transmitted. Aside from security and communication purposes (Tor, I2P), P2P networks also focus on capacity in file-sharing networks (BitTorrent, Gnutella), decentralized search (YaCy), and storage (Freenet), or, in experimental environments, they are set up for analyzing life-like processes (DREAM).

Freenet Architecture

While storage in the YaCy system is only created for keeping indexes and for local architectural purposes, the Freenet storage system is designed to hold the information provided by the users of the network. The Freenet storage system operates as a self-organizing P2P network that uses allocated free space on personal computers to create a collaborative virtual file system, and it is therefore a most intriguing feature in terms of sharing information within distributed networks.

A computer running the Freenet software, or a Freenet node, consists of a unique routing algorithm, a web server, databases, and the Freenet Client Protocol (FCP). While FCP functionality is defined by interaction with third-party software, such as jSite, designed for uploading documents onto the Freenet network, or Freemail, an alternative email system, the Freenet routing algorithm and its web server are part of the local structure, whose specifics are described in further detail below.
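
As a rough illustration of how third-party software talks to a node over FCP, the following Python sketch opens the handshake of FCP version 2; the default port (9481) and the message fields shown reflect commonly documented Freenet defaults but should be treated as assumptions here, and the client name is purely hypothetical:

```python
import socket

# Hedged sketch of an FCPv2 handshake: connect to a locally running
# Freenet node (default FCP port assumed to be 9481) and introduce
# ourselves with a ClientHello message; the node should answer
# with a NodeHello message.
HELLO = (
    "ClientHello\n"
    "Name=HowtoThingsDemo\n"   # hypothetical client name
    "ExpectedVersion=2.0\n"
    "EndMessage\n"
)

with socket.create_connection(("127.0.0.1", 9481)) as sock:
    sock.sendall(HELLO.encode("utf-8"))
    print(sock.recv(4096).decode("utf-8"))
```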

The very first Freenet routing algorithm came very close to the idea of adaptive networks, which form in order to adapt to a changing environment: an information request originates at a node, is recorded, and is sent to another node in the network, which makes a note of the requested key and passes the request further into the network until the requested data is found.

Such a design was achieved by sending the following request messages: data request, data reply, request failed, and data insert. Within the designed structure, the initiator node of the request allocates a unique ID, based on an encrypted key, to the request message, together with a value number – time to live (TTL) – defining how many more times the message may be passed. The message is then sent to a neighbor node that is more likely to possess the requested information. When a node receives a query, it first checks its own store and, if it finds the requested information, generates a data reply message and returns it with a tag identifying itself as the information holder. If the information is not found, the node forwards the request to the node in its table with the closest key to the one requested. That node then checks its data store, and so on. If the request is successful, each node in the chain passes the data reply message back upstream and creates a new entry in its routing table, associating the data holder with the requested key. If the requested information is not found within the allocated TTL number, the last node in the chain generates a request failed message and replies to the previous sender of the request (Clarke et al. 2002, Clarke 1999) (Fig. 10.). Besides having a completely different concept for routing search requests, such a system is unique in that it can function without intermediary servers such as DNS servers.
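
The following Python sketch is a much-simplified simulation of this request flow, not Freenet's actual implementation: plain integers stand in for Freenet's hashed keys, “closeness” is numeric distance, and, as a simplification of the data holder tag, each node on a successful chain records the neighbor it forwarded to:

```python
import random

class Node:
    """Toy Freenet-style node with a data store and a routing table."""
    def __init__(self, name):
        self.name = name
        self.store = {}      # key -> data held locally
        self.table = {}      # key -> neighbor believed to lead to it
        self.neighbors = []

    def request(self, key, ttl, visited=None):
        """Return (data, holder), or (None, None) on request failed."""
        visited = visited or set()
        visited.add(self)
        if key in self.store:
            return self.store[key], self       # data reply
        if ttl == 0:
            return None, None                  # request failed
        # Consult the routing table: forward towards the neighbor
        # associated with the key closest to the requested one.
        known = [(abs(k - key), n) for k, n in self.table.items()
                 if n not in visited]
        if known:
            _, nxt = min(known, key=lambda pair: pair[0])
        else:
            peers = [n for n in self.neighbors if n not in visited]
            if not peers:
                return None, None
            nxt = random.choice(peers)
        data, holder = nxt.request(key, ttl - 1, visited)
        if data is not None:
            # The reply travels back upstream; each node on the
            # chain learns a route for this key (here: the next hop).
            self.table[key] = nxt
        return data, holder

# Wire up a small ring of nodes and seed one of them with data.
nodes = [Node(f"n{i}") for i in range(6)]
for i, node in enumerate(nodes):
    node.neighbors = [nodes[(i - 1) % 6], nodes[(i + 1) % 6]]
nodes[4].store[42] = "some document"

data, holder = nodes[0].request(42, ttl=10)
print(data, "held by", holder.name)
```

A repeated request for the same key would now follow the learned routing-table entries instead of wandering, which is the adaptive behavior described above.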

0.30402944246776265

The installation titled 0.30402944246776265 was developed during 2013 and 2014 as an artistic project extending theoretical research on self-organization in non-uniform computer networks. It unfolds as a set of computers showing the interaction of elements with each other.

The installation uses n computers (or nodes) and software that enables data exchange among them. The viewer of the installation is allowed to move around and interact with the computers, thus becoming part of the overall ensemble. The computers are located next to each other so the viewer can compare the animated graphs visible on the monitors.

In order to emphasize the diversity of the surrounding elements, a wide range of older and newer computers is used for the installation. The variety of computers encourages the viewer to consider the technology in our environment. Why does the installation use outdated computers? How outdated are the computers? And why computers and not, for example, TV screens? The use of older computers, first of all, can suggest rapidly aging technology and technical evolution, which should further raise questions as to what is next and where technology is leading us. Secondly, when these decades-old computers are compared to up-to-date tablets and smartphones, one could think, well, technology has become much smaller, much more user-friendly, more streamlined, and therefore less accessible in terms of computer architecture. It follows that the near future promises even more direct, seamless interaction with computers, and humanity will possibly merge with computers or even become computers, as predicted by futurist Ray Kurzweil in his book The Singularity is Near (2005). Thirdly, one usually interacts with computers directly. Although interaction with the computers is not precluded in this installation, the configuration suggests that the computers do not require further human input and are operating independently.

The virtual environment shown on the computer screens suggests that something is happening between the installed machines. The animated graph on each screen is a visualization of the activity within the computer network. The graphics show, in real time, neighbor nodes and the data traffic between them. As the location of each node in the graph is marked by a distinct color, it is possible to trace which node is represented on the graph and how data chunks are sent between the nodes. The graphics are simple, animated geometric forms that, on the one hand, might convey the idea of computational concepts emerging from simple rules and, on the other hand, might indicate creativity emerging from simplified interactions between the different elements. The simplified computer screen animation might also refer to early computer graphics or science fiction aesthetics, from a time when such aesthetics were relatively sophisticated to the cultural eye and proposed that, in the near future, we would exist in an environment where computers were as intelligent as humans.[2] In this respect, the viewer of the installation might compare such aesthetics to contemporary 3D graphics, or rather to the spatial representation of physical things,[3] and likewise to hardware aesthetics, in order to shift him or herself back and forth in space-time.
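
In the spirit of those screens (though not the installation's actual software), a minimal network visualization can be sketched in Python with networkx and matplotlib, giving each node a distinct color:

```python
import matplotlib.pyplot as plt
import networkx as nx

# Toy stand-in for the installation's screens: draw a small ring
# of nodes with its links, one distinct color per node.
graph = nx.cycle_graph(6)
colors = [plt.cm.tab10(i) for i in range(graph.number_of_nodes())]
nx.draw(graph, node_color=colors, with_labels=True)
plt.show()
```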

Footnotes

[1] Consider, for example, John Cage's piece 4'33" composed in 1952, or Tehching Hsieh's One Year Performance from 1980-1981.

[2] Consider, for example, the Spacewar! computer game from the early 1960s, or George Lucas' “THX 1138” (1971) and “Star Wars” (1977).

[3] Consider, for example, Steven Spielberg's “Minority Report” from the early 2000s.