1. I'll Now Take Your Tech Questions at Arapahoe Library District →

  2. kyrianne:

    I IMMEDIATELY recognized this

    Dirk baby you have fine art and Blucifer and everything I love i love you please be my child for real

    The sculpture is by Sayaka Ganz and it’s called Emergence

    He looks like he only has the black half of the sculpture though :C

  3. fruitsoftheweb:

    fruitsoftheweb:

    Magpie
    vortex ring wake

  4. fruitsoftheweb:

    Archaeopteryx flight mechanics modelling

  5. fruitsoftheweb:

    fruitsoftheweb:

    A hovering hummingbird.

  6. nopenothanks:

    nopenothanks:

    There is more than one abandoned ballroom in Detroit.
    Photography by Yves Marchand and Romaine Meffre.

  7. fuckyeahabandonedplaces:

    fuckyeahabandonedplaces:

    A Good Read by bpdphotography on Flickr.

  8. softpyramid:

    softpyramid:

    Francis Alÿs
    Paradox of Praxis 1 (Sometimes doing something leads to nothing)
    Mexico City, 1997
     
     
    A performance piece in which Alÿs pushed a block of ice around Mexico City until it melted away into nothingness.  
    "Fail, fail again, fail better.” - Samuel Beckett

  9. New Art/Science Affinities: A Re-introduction - Matilda, digital →

  10. Stage 4: Live Projected Visualization

    Viewing the Scene

    One of the problems with this visualization that has puzzled me is the poor positioning of the graphics within the openFrameworks window. Adjusting the joint position data as it comes in by mapping it to the screen has been helpful. My latest efforts have involved trying to use an OF camera to simply look at the graphics from a different perspective. After studying the 3D camera examples included with OF, I understood the somewhat obvious solution. Rather than translating the world coordinates to the (I admit arbitrary) dimensions of the OF screen, why not simply match the screen to the Kinect’s view?

    Matching the Kinect View to the Screen

    I created a Kinect object and pulled its width and height for the window dimensions. What resulted was simply a smaller version of the same problem. The visualization was still caught in the upper right quadrant of the screen. Mapping the coordinates to the Kinect dimensions after they came in did not fix this issue with the graphics. Because this method was unsuccessful I abandoned it.

    Testing Each Line

    I decided to test each line of code I’d written toward this problem to see where things were actually breaking down. I commented out all the code for the cameras and began where I started. The information from the Kinect, and it seems any other camera device, comes into OF upside down, so the common fix is to use ofScale(1, -1, 1) to correct this. The side effect of this scaling, however, is that the graphics for the skeleton’s joints walk off the screen. To fix this I was using ofTranslate(x, y, z) to reposition the graphics. I was using ofTranslate(ofGetWidth()/2, ofGetHeight()/2, 0), though, and this was causing the shift to the upper right quadrant. The graphics were drawing from the repositioned (0,0) point, the center of the screen. I’m guessing that while ofScale(1, -1, 1) flips the graphics back to the correct direction, the graphics draw from (0, -height). The graphics now start in the center of the screen with ofTranslate(0, -768, 0).
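
    Concretely, the flip-and-recenter step looks something like the sketch below. This is only a minimal guess at the structure, assuming the 768-pixel-tall window implied by the -768 offset and a hypothetical joints container of ofVec3f positions; ofDrawSphere is the current openFrameworks call (older versions use ofSphere).

        void ofApp::draw(){
            ofPushMatrix();
            ofScale(1, -1, 1);        // Kinect data arrives upside down; flip the y axis
            ofTranslate(0, -768, 0);  // after the flip, drawing starts at (0, -height); shift back on screen
            for(auto & joint : joints){                  // joints: hypothetical vector of ofVec3f positions
                ofDrawSphere(joint.x, joint.y, joint.z, 5);
            }
            ofPopMatrix();
        }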

    The Viewing Frustum

    Looking over the advanced3dExample has introduced me to a nice phrase for the space I’m trying to get a handle on — the viewing frustum. The odd part of the viewing frustum is that it’s basically inverted on the screen. As you get closer to the real world camera, the Kinect, its real world frustum gets smaller. However on screen, the skeleton graphic appears to move into the wider area of the frustum as this occurs. Right now, the graphics are centered in the screen, but the 3D illusion isn’t effective because real world movement doesn’t translate to visibly large changes on screen. If I can correctly position a camera on the scene, the user should be able to see the dimensionality of the 3D graphics rather than a very 2D looking representation. (Although, we know that it is 2D because it’s a screen, but I mean the visual effects that make 2D seem convincingly 3D.)

    Moving Through Space

    The final challenge then came to setting up the camera correctly to view the scene. At first the camera was set to track with the position of the user’s head, as an OF example has done. As the user’s head moved from side to side and up and down, the view of the scene adjusted accordingly, so it seemed as if you were looking into a room. This perspective would be useful had that been the goal of the project (to look into a room), but the camera is actually supposed to show viewers a view of themselves. I realized this problem with the camera placement in testing because the 3D graphics only extended slightly beyond me as I moved. In the 3D space I’d created to view my own 3D motion, I had made the mistake of “holding the camera”. I fixed this by simply making the camera track the z-coordinate position of the user’s body in the negative. This final tweak produced the main result I was after.
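
    As a rough sketch of that final camera setup, assuming an ofCamera member named cam, a torsoPos variable updated from the joint data, and a drawSkeleton() helper (all three names are placeholders, not from the original code):

        void ofApp::update(){
            // put the camera on the opposite side of the scene from the user,
            // so the view reads as "looking at yourself" rather than "holding the camera"
            cam.setPosition(0, 0, -torsoPos.z);
            cam.lookAt(ofVec3f(0, 0, torsoPos.z));
        }

        void ofApp::draw(){
            cam.begin();       // everything drawn between begin/end is seen through the camera
            drawSkeleton();
            cam.end();
        }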

    Keeping the Skeleton from World to Screen

    One of the next issues I discovered with the tracking is the Kinect and OpenNI’s behavior when too many joints go out of view. The Kinect has a minimum distance of about 6 feet for one user, 8 feet for two (although this has decreased for the Xbox One sensor). Translating the real world space to the screen space revealed some quirks. Users could “dive into” the graphics within a small distance between the Kinect and the back of the real world frustum. The graphics reacted most sensitively to motion within this space. Advancing closer to the Kinect beyond this point caused too many joints to drop out of view for OpenNI and the Kinect, so the graphics “died” at that point. The same thing happened when users backed up too far and passed beyond the “back wall” of the real world frustum. To keep the tracking going for as long as possible, I tried tracking the z-coordinate of the user on different joints. The head and feet were the worst joints because they were the first to disappear from view and also the joints with the largest range of motion. Using these two caused the visualization to move too erratically. However, using a joint such as the torso provided so much stability that the visualization didn’t have enough movement. What worked best was tracking a hip joint, which had enough movement in dance but a small enough range of motion to keep the visualization stable enough that viewers could understand what they were seeing.
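
    A minimal sketch of that joint choice, assuming the app stores joints in a map keyed by name (the map layout, the "righthip" key, and the smoothing factor are all illustrative guesses):

        #include "ofMain.h"
        #include <map>
        #include <string>

        float trackedDepth(const std::map<std::string, ofVec3f>& joints){
            static float lastDepth = 0;                        // previous depth, kept between frames
            auto it = joints.find("righthip");                 // hips move with the dance but have a small range
            if(it == joints.end()) return lastDepth;           // joint dropped out of view: hold the last value
            lastDepth = ofLerp(lastDepth, it->second.z, 0.1f); // light smoothing keeps the view stable
            return lastDepth;
        }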

    Setting the Scene

    From Screen to Projector

    Thus far the visualization has been shown on my laptop screen. However, for viewing, I’ve decided that it may be more effective to project the visualization on a screen so viewers can connect the size of the motion graphics with human proportions and, from there, dance movement. Viewing the visualization projected on a surface, however, made some needed adjustments apparent. The visualization would need to “stand” at the bottom of the projected surface. The two foot joints for the skeleton data would need to be at the bottom of the screen so that when projected a viewer can feel as if he or she is standing next to a figure of typical human height. The visualization currently sized the skeleton so that it was near the back of the frustum, so it was small and centered on the screen.

    How would I solve this? The figure can’t move into the middle of the visualization as the tracked user moves backwards, because that doesn’t mimic real life. Thinking about it more, it seems like the motion would only seem realistic if it remained at the “front” of the screen, that is, as close as possible to the viewer without disappearing out of the bounds of the screen, to preserve the illusion of 3D space. I consulted Brandon Gellis on this. He offered that the wall could be thought of as a window into a space extending beyond the wall. This seemed to be the most logical interpretation for the visualization.

    Creating an Atmosphere for Dance

    In the final presentation of this phase of the visualization, the projection was accompanied by salsa music and classmates were invited to try out the visualization by dancing or moving in front of it. This setup offered a new idea. Originally my aim was to view the visualization as a product of two separate processes: motion capture and visualization of motion data. However, manifested as more of a live installation piece, I was reminded less of participant observation methods and more of the recording and tracking devices used for collecting information on animal behavior in the wild. The observation is more passive, and the devices are made to blend in with the environment so animals don’t notice them, or are disguised to be somewhat inviting. The presentation raised the question of whether a passive interactive installation that invites dancers to perform in front of it would be a useful setup.

    Output

    [5 images]

  11. Stage 3: Live Motion Visualization

    The objective of this stage was to create a live visualization of motion data. Previous stages aimed to capture motion data in a useful format and to visualize motion data that had already been collected. This stage introduces the challenge of having the visualization properly interpret motion data as it is being collected and display it to the user in 3D space.

    From Synapse to NI Mate to OSC

    Costs and Benefits

    In previous stages Synapse served as the middleware between the Kinect and openFrameworks. While it served well in motion tracking, it had the disadvantage of requiring users to awkwardly perform the psi pose in order for the Kinect to pick up their skeleton. Because the long-range goal of this work is to produce a recording method that is natural for dancers, the psi pose was undesirable. In addition to that, Synapse could only handle one skeleton at a time. My professor suggested an alternative middleware, NI Mate. The advantage of NI Mate is that it uses other detection methods to pick up skeletons and can accommodate multiple users.

    Translating

    Moving from Synapse to NI Mate within openFrameworks did not seem difficult. Both middlewares basically send and receive OSC messages between the Kinect and the computer. It turned out that in order to get the visualization to work with NI Mate, I’d need to uproot the OSC communication that was embedded in the SynapseStreamer class. NI Mate didn’t have a specialized open source class available the way Synapse did, but after I examined the Synapse class more closely, I realized it was basically a wrapper for OSC. The translation turned out to be the most confusing part of the process. Classes were included in the header files of other classes, and as I cut pieces of code out, I had to trace back which lines each piece required to work outside its class. In addition, once it seemed like I’d extracted all the code I needed, it wasn’t clear that the program was actually receiving any OSC messages. I ran the program several times with no response in the visualization and no coordinates recorded to the data file, and eventually the program began to hang. My professor helped and ran the example OF OSC program with me, which ruled out the idea that the OSC messages were just not getting through and meant the code was wrong somewhere. The port number (7000) was correct, and changing it didn’t make a difference. It turned out that OSC only needed a receiver object, and the code that ran the sender object for Synapse was interfering with the communication. Removing it cleared the block in the program. However, at the time of demonstration, OSC was still not connecting with the visualization, so Synapse was run to keep the graphics working.
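
    For reference, a receiver-only setup with the stock ofxOsc addon looks roughly like the sketch below; the port matches the 7000 mentioned above, while the message address shown is only an illustrative example of a joint message, not necessarily what NI Mate or Synapse actually sends.

        #include "ofMain.h"
        #include "ofxOsc.h"

        class ofApp : public ofBaseApp {
        public:
            ofxOscReceiver receiver;

            void setup(){
                receiver.setup(7000);            // listen on the port the middleware sends to
            }

            void update(){
                while(receiver.hasWaitingMessages()){
                    ofxOscMessage m;
                    receiver.getNextMessage(m);  // older openFrameworks versions take &m instead
                    if(m.getAddress() == "/torso_pos_world"){   // illustrative joint address
                        float x = m.getArgAsFloat(0);
                        float y = m.getArgAsFloat(1);
                        float z = m.getArgAsFloat(2);
                        // hand x, y, z to the visualization here
                    }
                }
            }
        };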

    "Another dimension, another dimension, another dimension…"

    Lost in Space

    The other challenge in this stage was properly configuring the motion to represent a third person view of the user’s motion in 3D space. The image output for the recorded motion kept cropping the application window to the bottom left quadrant of the screen. In addition to this, within the application window the skeleton graphic that showed users where they were located within the screen remained in the top right quadrant. I was trying to determine if this was caused by the mapping of the z-coordinates from OSC to the screen. On top of that, the visualization graphics were displaced from the tracking skeleton. The visualization graphics were definitely generating according to and along with the users’ motion, but weren’t in the same place on the screen. Adding a camera to the scene was one possible solution to this.

    I consulted another student who’d worked with 3D motion, and she mentioned that the space represented within the screen may be the wrong size for the motion. She explained that space in the screen was like a room, and the back of the room was visually smaller than the front. This made sense for why the tracked skeleton appeared to walk on, over, and past the viewer, but I was still confused by the vantage point. If the skeleton was getting closer, it seemed more sensible that it should get larger, fill the screen, and feel “in my face”. Walking out the top right corner of the screen didn’t seem to make sense. Apparently the solution to this was to add a camera to the visualization. A camera allowed a more realistic portrayal of 3D motion than drawing without one. Otherwise, it seemed like OF was doing its best to show what 2D figures should visually do to imitate 3D motion. Adding the OF easyCam was somewhat helpful, but controlling it required the mouse, which meant I had to stop testing to adjust it in the middle of recording.
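
    The easyCam experiment amounts to a minimal sketch like the one below (drawSkeleton() is a placeholder for the joint drawing code); the mouse-driven orbit control is exactly what made it awkward to use mid-recording.

        ofEasyCam easyCam;        // member of ofApp

        void ofApp::draw(){
            easyCam.begin();      // everything drawn between begin/end is viewed through the camera
            drawSkeleton();       // placeholder for the joint graphics
            easyCam.end();        // mouse drag orbits the camera, which interrupts recording
        }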

    For the demonstration, the 3D graphics were working somewhat better, but the tracking was off in that the visualization filled the whole screen and the mesh did not dissipate over time. This caused the screen to eventually fill with colored, flashing blocks of trails. So while the 3D graphics were achieved, they obstructed the view of the scene and the skeleton, making it difficult for users to interact after a short period of time.
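
    One simple way to make the trails dissipate, sketched under the assumption that the trail is stored as a list of recent points (the container and the 300-point cap are illustrative, not from the original code):

        #include <deque>

        std::deque<ofVec3f> trail;           // recent joint positions, oldest at the front
        const size_t maxTrailPoints = 300;   // roughly ten seconds of points at 30 fps

        void addTrailPoint(const ofVec3f& p){
            trail.push_back(p);
            while(trail.size() > maxTrailPoints){
                trail.pop_front();           // drop the oldest points so the screen doesn't fill up
            }
        }

        void drawTrail(){
            ofMesh mesh;
            mesh.setMode(OF_PRIMITIVE_LINE_STRIP);
            for(const auto& p : trail) mesh.addVertex(p);
            mesh.draw();
        }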

    Output

    [4 images]

  12. Stage Two: Motion Data Collection

    Objectives

    The objective of Stage Two was to create a method of collecting motion data that can be used out in the field for gesture and movement research. The system needed to create a file of timestamped motion data and export time-sequenced images of the motion in action.
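
    As a minimal sketch of what a timestamped data file could look like (the CSV layout, file name, and helper are illustrative, not the format actually used):

        #include "ofMain.h"
        #include <fstream>
        #include <string>

        // one CSV row per joint per frame: elapsed time in ms, joint name, x, y, z
        void logJoint(std::ofstream& out, const std::string& name, const ofVec3f& p){
            out << ofGetElapsedTimeMillis() << "," << name << ","
                << p.x << "," << p.y << "," << p.z << "\n";
        }

        // usage, e.g. in ofApp::update():
        //   static std::ofstream out("motion_data.csv");
        //   logJoint(out, "head", headPos);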

    Successes

    This stage was successful in that the system does create a text file of motion data and does create a series of images of the motion captured over time. The main objectives were achieved.

    Failures

    This stage was not successful in that the system sometimes creates unreadable image files and that the live motion tracking visuals are boring.

    Necessary Improvements

    While the back end tasks are working, the visuals for this stage could be much better. Also, the feedback of the motion data into OpenFrameworks is a little off. The tracked user’s movement does not affect the graphics in a straightforward way, like when someone sees himself in a mirror or on a video camera. This requires better programming within OF’s 3D space.

    Tested Materials and Methods

    Processing
    SimpleOpenNI
    OpenFrameworks
    ofxKinect
    ofxOpenNI
    Synapse
    SynapseStreamer
    Microsoft Kinect Sensor

    Final Materials and Methods

    OpenFrameworks
    Synapse
    SynapseStreamer

    Part I: Determining a Programming Setup

    This stage was the first time I experimented with hacking into the Kinect directly for its data. Cory Metcalf introduced me to some more of the Kinect’s quirks and the types of motion data it offers by showing me how to collect its data using MaxMSP and Synapse. This was a great introduction to how the Kinect outputs its motion data, but as a text-based programmer MaxMSP would not be my ultimate solution.

    My first text-based approach to gathering the data was to use Processing and SimpleOpenNI. OpenNI allows developers to create software for natural interface devices, and SimpleOpenNI provides a wrapper for this middleware so Processing developers can create interactive projects with it. This seemed like a straightforward, easy approach since Stage One used Processing. The method was abandoned after further research into the SimpleOpenNI library. Its original creator no longer developed for it or provided support for its development community. His example programs were added to the Processing forums with a notice to the community to help each other figure the library out on their own.

    Keeping in mind that the methods created here will ideally be useful to anyone who intends to repeat this project, it made sense to move on to another solution. I decided to try OpenFrameworks. This option would be more difficult because I was now used to programming in Java, not C++. After setting up the latest version of OpenFrameworks, I researched Kinect libraries for it and found that ofxKinect had already been integrated into the native build. My excitement about this did not last long: I discovered that this library did not provide skeleton tracking. Again I searched for this particular feature and found ofxOpenNI, an OpenNI wrapper for OpenFrameworks. Progress seemed to ensue again.

    Though my curiosity about ofxOpenNI has not dimmed, it soon became unclear how to extract the Kinect’s joint data from this library. The example program made it clear that this was certainly possible, but I could not understand where the data was coming from after studying the code. Again I searched for another solution and came across an example of a program that, again, used Synapse but with OpenFrameworks. The program took advantage of a Synapse class written specifically for OpenFrameworks. After a brief digression into trying to download Subversion, I figured out how to use this class to read out the data.

    Part II: Connecting OpenFrameworks to Synapse

    Using Synapse with OpenFrameworks was very similar to using it with Max. Synapse needed to be running before my program started in order to properly collect data. The hinge was understanding how Synapse returns that data, and recognizing the difference between the vector math type, ofxVec3f, and the C++ std::vector container type.
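
    To spell out that distinction (a minimal sketch; ofxVec3f comes from the ofxVectorMath addon in older openFrameworks, and newer versions call it ofVec3f):

        #include <vector>

        ofxVec3f head(0.0f, 1.5f, 2.0f);    // a single 3D point: one joint's x, y, z
        std::vector<ofxVec3f> joints;       // a growable container holding many joint positions
        joints.push_back(head);             // the list grows as joints are read each frame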

    Part III: Accessing the Synapse Data to Write and Draw

    Accessing the Synapse data was a challenge for a while. Though the update function could easily write the data, the draw function could not use the same variables to display shapes. After debugging, I realized that the program started drawing quickly enough that the variables were still empty when the draw function called them. After adding some if statements and boolean controls, the program flowed in a better order.
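
    The guard amounts to something like the sketch below; the getNewMessage() call on the SynapseStreamer object is an assumption about its interface, and drawSkeleton() is a placeholder.

        bool hasData = false;                      // member of ofApp, set once joint data arrives

        void ofApp::update(){
            if(synapseStreamer.getNewMessage()){   // assumed SynapseStreamer call; the real name may differ
                // ...copy joint positions into member variables...
                hasData = true;
            }
        }

        void ofApp::draw(){
            if(!hasData) return;                   // draw() runs before any data arrives; skip until it does
            drawSkeleton();                        // placeholder for drawing the stored joints
        }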

    Part V: Displaying the Data in Realtime

    After the program flow was fixed, I moved on to creating nodes to display all the joints. This was useful because it clued me in to more weird quirks about the Kinect’s skeleton reading and gave me a feel for OpenFrameworks’ 3D space. Some flipping of the Y coordinates and mapping of the data to fit the OpenGL window were necessary before I could actually see anything.
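
    That flip-and-map step, as a sketch only, since the actual input ranges depend on the Kinect data and would need tuning:

        ofVec3f toScreen(const ofVec3f& world){
            float x = ofMap(world.x, -1000, 1000, 0, ofGetWidth());
            float y = ofMap(world.y, -1000, 1000, ofGetHeight(), 0);   // flip Y: screen y grows downward
            float z = ofMap(world.z,  1000, 4000, 0, -500);            // push farther users deeper into the scene
            return ofVec3f(x, y, z);
        }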

    Part VI: Saving Images of the Data

    Saving images of the motion data was probably the easiest feature to implement, though it was unpredictable. The program sometimes wrote broken PDF files and other times wrote them just fine. However, I was able to time the saving to produce one image per second.
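
    Timed saving can be sketched roughly as below, using ofSaveScreen for a bitmap capture; the PDF output mentioned above would instead wrap the drawing in ofBeginSaveScreenAsPDF / ofEndSaveScreenAsPDF. The file naming and drawSkeleton() helper are illustrative.

        unsigned long long lastSave = 0;   // member of ofApp, in milliseconds

        void ofApp::draw(){
            drawSkeleton();                // placeholder for the joint graphics

            if(ofGetElapsedTimeMillis() - lastSave >= 1000){
                ofSaveScreen("frame_" + ofGetTimestampString() + ".png");   // one capture per second
                lastSave = ofGetElapsedTimeMillis();
            }
        }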

  13. Procedural / Generative System in Houdini -- Hyperfeel - "Macro-Textural / Sensual Aesthetic" for Nike by @field_io →

  14. Guide to Camera Types for Interactive Installations / Guest post by Blair Neal (@laserpilot) →