there are many things to be done as of now, but first, my thoughts.
1. ideas
i’d like to draw from my interest in humans - that’s the main focus of this project: people and how they interact with computers, not through physically computer-looking objects, but through a camera and a screen. a lot of my work revolves around dumbing down computing machines, and this is something along those lines.
i think that the ultimate goal for now is to have the installation output something completely useless from the data that is offered to it, while evoking the feeling of being under surveillance.
2. concept
the project as of now (still unnamed) is the result of a bunch of iterations.
below are some conceptual sketches of what the final installation is meant to look like. it is in some ways reminiscent of the cold mechanical feeling given off by surveillance cameras and the odd feeling of being in a coin-operated photo booth (i know that they are contactless now; also funny how contactless is just a word and not an adjective anymore).
the original idea was to emulate a robot arm, degrees of movement, articulation and all, but i wanted my installation to have presence and magnitude(?), so i opted for a more linear approach that would be easier to design in a modular fashion and then have the housing 3D printed in a very limited time frame.
most of the technical aspects of this linear and rotational movement have to be worked out in proper detail, but the project can function (sans gravitas) as a fixed module without taking away from the intended outcome.
the main feature of this installation is the camera module, which can rotate about an axis to follow a person’s movement and photograph them, plus a visual interaction element (which also serves as my submission for the creative coding class). i intend to use either an Arduino Nano 33 BLE Sense or the Arduino Portenta H7 alongside OpenCV functionality or the OpenMV framework to train a simple machine learning computer vision model.
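the rotate-to-follow behaviour can be sketched in plain python before any of the board-specific code exists. this is only a sketch under stated assumptions: the frame width, gain, and servo travel below are placeholder numbers i made up for illustration, and the face-detection step is stubbed out (on the actual hardware the face position would come from the OpenMV/OpenCV detection results).

```python
# sketch of proportional camera tracking: nudge the rotation axis so a
# detected face drifts toward the centre of the frame.
# all constants here are placeholder assumptions, not measured values.

FRAME_WIDTH = 320        # assumed QVGA capture width in pixels
GAIN = 0.05              # degrees of rotation per pixel of horizontal error
ANGLE_MIN, ANGLE_MAX = 0.0, 180.0  # assumed servo travel limits


def next_angle(current_angle, face_x_center):
    """return a new servo angle that moves the face toward frame centre.

    face_x_center is the x coordinate (pixels) of the detected face,
    which would come from the vision model on the real device.
    """
    error = face_x_center - FRAME_WIDTH / 2  # positive: face right of centre
    angle = current_angle + GAIN * error
    # clamp to the servo's assumed physical range
    return max(ANGLE_MIN, min(ANGLE_MAX, angle))


# example: face detected at x=240 (right of centre) while camera sits at 90 degrees
print(next_angle(90.0, 240))  # 90 + 0.05 * (240 - 160) = 94.0
```

the same loop would just run every frame: detect, compute `next_angle`, write it to the servo. the proportional gain decides how twitchy the camera feels, which is itself part of the surveillance-camera aesthetic.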
3. how i’ve got to this point
i really really really wanted to emulate the robot arm from that one video that cleans up some sort of blood-like viscous goop that surrounds it -> but i wanted it to be interactive -> i want to explore the uselessness of machines (funnily enough i am trying to use some state-of-the-art electronics to emulate what an old computer with a peripheral unplugged is capable of).
that, and i really want to test the theory that people, as long as they believe that they are being watched, tend to behave in a more self-conscious manner. i wanted to have the robot arm follow people around the room, to pick one person out of a crowd and put them on blast for the audience to see -> i intend to borrow from this visual output and carry it forward in the final installation.
4. final presentation (as of now)
as stated earlier, i want it to be an interactive piece, preferably set up in the greencoat dark lab for people to experience and partake in. i’ll have a more concrete display plan around w3/w2. the visual outcome generated from this project will be used as part of my submission for the creative coding module as well.
5. where i stand as of now
i’m going to start with the code for computer vision, as that is most crucial to the success of this project, which i hope to have up and running by the end of w4, alongside work on the visual output code as well.
as of 03/11/23
notes:
(1) it needs a cooling unit, it gets hot
(2) mac mini + external display for the final output
(3) mmm machine learning time
as of 07/11/23
that’s what i’m calling it
it is some sort of analogy performance piece that involves a camera and a display, with audio elements and spoken word.
my mother is the camera and she is the display. i am some version of my mother. casting aside tedious dialogue about children taking after their parents or actively avoiding becoming the sort of people their parental units are, human behaviour is learnt, and in some form chewed up and spat into our developing minds. this is what i want to evoke through this piece. another one of my ideas for an interactive performance piece.
what i envision it to look like -> (that is a picture of me and my mum holding hands) -> i think i will 3d print an enclosure for the arduino, but it might be a massive print, so only if i have the time left over to do so.
i tried to get the machine learning/computer vision element working, but have hit some sort of wall (i don’t know how to go ahead and run the code off the arduino). i’ve been at it for days; perhaps this was too big a venture for five weeks with all that is going on right now. i’m unsure of what i will actually hand in, perhaps only this documentation.
i have been reading the tinyml book, and every bit i read has me more confused. i’ve got the arduino nano; perhaps i could substitute the portenta with that. i don’t know when i can 3d print. if only i had more time, why has life become this confusing mess of too many things happening all at once aaaa. this feels like a cop out, but i seem to be stuck at the moment. i am sorry if nothing works out.
i cannot finish it now, but i have the start of it, and i will build upon it; for now, my process is all i have to show.