# Anonymity

For Mai's and my presentation, we wanted to talk about anonymity in virtual reality. Specifically, we wanted to discuss the differences between social and private VR and what advantages and disadvantages there were to each medium. How can ‘good behaviour’ be encouraged, or in some ways policed, and how can you create ‘safe’ VR spaces?

One of the more interesting topics we covered was the difference between anonymity (lacking individuality, unique character, or distinction) and pseudonymity (a false name or a state of disguise). I think the moniker typically ascribed to VR experiences is ‘anonymity’, but when it comes to social VR I feel that couldn’t be further from the truth. It’s not as if people are trying to disguise their own identity – they don’t usually mask their voices or anything like that – but instead they seem to enjoy dressing up their avatars in different ways.

We discussed platforms and experiences like Alt Space VR, Facebook Spaces, Where Thoughts Go, and VR Chat, as well as an episode of the Voice in VR podcast that featured a talk show host from VR Chat. He addressed the community and how they’ve been dealing with ‘trolls’, and he described how, when friends meet in VR, they will often go to a private space where they can interact, which kind of blew my mind. It’s not enough to be in a VR space with someone – like a lobby or general meeting space – you then need to travel to a separate space to get away from the crowds.

Overall, I think the presentation went well and there was some very interesting conversation that came out of it. See below for the deck.

## Perceptron Code

I used two different pieces of code to make the perceptron. The first is from this blog post, and the second is from some work I did with Stephanie Koulton.

```
import numpy as np

X = np.array([
    [-2, 4, -1],
    [4, 1, -1],
    [1, 6, -1],
    [2, 4, -1],
    [6, 2, -1],
])

y = np.array([-1, -1, 1, 1, 1])

def perceptron_sgd(X, Y):
    w = np.zeros(len(X[0]))
    eta = 1
    epochs = 20

    for t in range(epochs):
        for i, x in enumerate(X):
            # Update the weights only when a sample is misclassified
            if (np.dot(X[i], w) * Y[i]) <= 0:
                w = w + eta * X[i] * Y[i]

    return w

w = perceptron_sgd(X, y)
print(w)
```

And here is some of the code I created with Stephanie:

```

import random
import numpy as np

def guess(sumtotal):
    # Step activation: return 1 when the weighted sum is non-negative, otherwise 0
    if sumtotal < 0:
        return 0
    else:
        return 1

# Training data for OR: each entry is ([inputs], expected output)
train_data = [
    ([0, 0], 0),
    ([0, 1], 1),
    ([1, 0], 1),
    ([1, 1], 1),
]

biasinput = 1
weight = np.random.rand(2)
biasweight = np.random.rand(1)
const = 0.2
errors = []

def perceptron():
    x, expected = random.choice(train_data)
    x = np.asarray(x)  # make the inputs an array so the math below works
    result = np.dot(x, weight) + np.dot(biasinput, biasweight)
    print(result)
    error = expected - guess(result)
    print(error)
    errors.append(error)
    newweight = const * error * x
    # delta = np.dot(x, error)
    # print(delta)
    # weight = weight + delta * const
    # print(w)

perceptron()
```

## Urban AR

There are a multitude of ways, theoretically, that an urban space can be enhanced by augmented reality. The question for me, personally, is: will anyone care to use it? How can augmented reality be used by someone who lives a fast-paced urban lifestyle (my New York bias is showing here)? What honest New Yorker has time to stop, pull out their phone, open an app and show their phone something? Or, let’s say we’re doing web-based AR: what New Yorker wouldn’t just go to their web browser of choice to find out if their train is delayed or which place around them has the best slice of pizza?

As it stands now, AR is so much of an “experience” that it can’t genuinely accommodate an urban lifestyle. With that said, this only takes into account those with permanent residences in urban centers, not those who play a large part in keeping the economy afloat: tourists.

The idea of an AR walking tour doesn’t seem too original (you may be sensing some skepticism in this post, and I don’t blame you). However, what if AR could remove the rose-colored glasses and reveal the foundation of crime, greed, and death that this city is built on? Do tourists know that less than 30 years ago, Times Square was filled with prostitutes, pimps, and pornographic cinemas? Does the NYU freshman relaxing in the shade in Washington Square Park know that hundreds were hanged not 10 feet from him and that their bodies could literally be at his feet right now?

The question I have yet to answer for myself, however, is: why? It’s difficult enough to create a lasting, meaningful AR experience, so why focus on one that is so macabre? I think the answer is that I may be too concerned with practicality, and it might be fun, for once, to focus on a project that’s just fun for me to do.

## K-Means Clustering with ERA and xFIP

Coding is not an easy thing for me. Learning Python, P5.js, C#, C++, etc. has been no different for me than learning French, Japanese, or Dutch. I don’t say this to invoke pity but rather to explain why my examples are so frequently about baseball. If I apply baseball sabermetrics (a passion of mine) to coding, the work becomes less ‘work’ and more an interesting study of baseball. This assignment was no different.

While the concepts of K-Means clustering are something I’m a bit more familiar with thanks to Stephanie’s recording of your class and some additional research, the notion of building a proper algorithm without a previously coded framework is really daunting to me. Rather than try to code one myself, I figured it’d be a better use of time to find existing code (crediting its creator, of course) and figure out interesting ways in which I could use it.

The code below was created by a YouTuber whose work is featured here. I chose this example because the author explained each line of code as he wrote it, so I was able to understand what was happening a bit better.

For my data in this example, I decided to take two stats from pitchers in baseball: ERA and xFIP. I’m sure you’re familiar with the former – ERA is merely earned run average, or the total number of earned runs a pitcher gives up, divided by the number of innings he pitched and multiplied by nine – but xFIP might be a bit trickier for non-baseball nerds. xFIP is a sabermetric used to better determine the true ability of a pitcher. It strips away the defense of the team the pitcher plays for and the amount of “luck” he has, and it normalizes the number of home runs he gives up based on the league average. For a more in-depth explanation, check this page out.
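
Since ERA is just arithmetic, here’s a quick sketch of the formula with made-up numbers (not real stats from the data set below):

```
def era(earned_runs, innings_pitched):
    """Earned run average: earned runs allowed per nine innings pitched."""
    return 9 * earned_runs / innings_pitched

# Made-up example: 70 earned runs over 200 innings pitched.
print(round(era(70, 200), 2))  # 3.15
```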

I took the ERA and xFIP from the top 10 American League and National League pitchers and turned them into (x, y) data points. I then used the code below to run K-Means clustering on them and got this:

So the print function lets me know that the cluster is located at (3.49, 3.34). My issue is that I’m not sure what can be drawn from this information. Does this mean the average pitcher among the top 10 pitchers in the American and National Leagues has an ERA of 3.49 with an xFIP of 3.34? I’m fairly sure K-Means is different from a mere average, but what does the information reveal?
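
One thing worth checking, as a quick sketch with a few made-up points rather than my actual data: with n_clusters = 1, the single centroid really is just the column-wise mean of whatever points were passed to fit(), so in that special case K-Means and a plain average coincide.

```
import numpy as np
from sklearn.cluster import KMeans

# Three made-up (ERA, xFIP) points, only to illustrate the idea.
points = np.array([[2.25, 2.52], [4.74, 3.44], [3.32, 4.15]])

print(points.mean(axis=0))                                # plain column-wise means
print(KMeans(n_clusters=1).fit(points).cluster_centers_)  # the same point, from K-Means
```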

Also, I’d love to figure out not just how to find two clusters (I know how to do that in the code: just change the number in kmeans = KMeans(n_clusters = 1) to 2) but how to find the K-Means cluster for the AL pitchers and the NL pitchers individually.
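
A rough sketch of one way that could work (not what I actually ran, and the arrays are stand-ins for the real data below): fit a separate one-cluster K-Means on each league’s array, or stack both arrays and ask for two clusters, keeping in mind the algorithm groups by proximity rather than by league.

```
import numpy as np
from sklearn.cluster import KMeans

# Stand-in arrays; the real ones are the _ALpitchers / _NLpitchers data below.
al = np.array([[2.25, 2.52], [2.90, 2.65], [2.98, 3.04]])
nl = np.array([[2.31, 2.84], [3.49, 3.15], [3.53, 3.23]])

# One single-cluster fit per league gives each league its own centroid.
print(KMeans(n_clusters=1).fit(al).cluster_centers_)
print(KMeans(n_clusters=1).fit(nl).cluster_centers_)

# Alternatively, stack everything and ask for two clusters; the resulting
# groups follow proximity in (ERA, xFIP) space, not league membership.
both = np.vstack([al, nl])
kmeans = KMeans(n_clusters=2).fit(both)
print(kmeans.cluster_centers_)
print(kmeans.labels_)
```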

Here is the code that was used for the above:

```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

_ALpitchers = np.array([
    [2.25, 2.52],
    [2.90, 2.65],
    [2.98, 3.04],
    [3.29, 3.24],
    [4.07, 3.35],
    [4.74, 3.44],
    [3.09, 3.58],
    [4.20, 3.61],
    [3.55, 3.76],
    [3.32, 4.15]
])

_NLpitchers = np.array([
    [2.31, 2.84],
    [3.49, 3.15],
    [3.53, 3.23],
    [2.52, 3.27],
    [2.51, 3.28],
    [3.20, 3.34],
    [3.54, 3.38],
    [2.89, 3.49],
    [4.42, 3.60],
    [3.64, 3.63]
])

kmeans = KMeans(n_clusters = 1)
# Note: fit() ignores its second argument, so only the AL array is actually
# clustered here; to cluster both leagues together, stack the arrays with
# np.vstack first.
kmeans.fit(_ALpitchers, _NLpitchers)
centroids = kmeans.cluster_centers_
labels = kmeans.labels_

print(centroids)
print(labels)

colors = ['g.', 'g.', 'g.']
colors_two = ['r.', 'r.', 'r.']

for i in range(len(_ALpitchers)):
    print("coordinate:", _ALpitchers[i], "label:", labels[i])
    plt.plot(_ALpitchers[i][0], _ALpitchers[i][1], colors[labels[i]], markersize = 10)

for g in range(len(_NLpitchers)):
    print("coordinate:", _NLpitchers[g], "label:", labels[g])
    plt.plot(_NLpitchers[g][0], _NLpitchers[g][1], colors_two[labels[g]], markersize = 10)

plt.scatter(centroids[:, 0], centroids[:, 1], marker = 'x', s = 150, zorder = 10)
plt.show()
```

## Midterm: Continuation of Vinyl Project

For the midterm, Scott and I decided to continue working on the project we started a few weeks ago referenced here.

We really love the notion of an AR app that serves as a companion to a vinyl purchase. In our minds, a user would go to a record store, buy a vinyl and receive both the download code for the record’s MP3s – a practice already in use – and a download code for this app. Each app would be specifically catered to the vinyl purchased. We already vaguely fleshed out this idea with Bowie, so we wanted to try a different album: Radiohead’s Kid A.

Rather than approach this album thematically like we did with Bowie, Scott and I wanted to work from a more technical standpoint. Scott was really interested in API integration while I was interested in a more immersive AR experience. Scott likely goes into the API work in his post so I’ll focus on the immersive nature.

As of now, I think it’s really jarring to see AR images overlaid on top of other images. After the novelty wears away, it just seems kind of hokey. I’m interested in AR experiences that make the user look twice. For example, for this particular album cover featured below, I wanted to augment the actual mountains.

In my head was an experience in which a user would point their phone at the album and seemingly see …the album cover and nothing more. I wanted them to feel as if the app was broken or wasn’t working. Then I wanted something to happen – the ice on the mountains begins to melt, the sky starts to move – that made them look closer.

For Kid A, Radiohead released a dozen or so short videos, all of which are featured below in the compilation.

Scott and I took the video featured at 9:14 and figured it would be perfect to augment onto the cover. Scott took the album cover, made the mountains transparent in After Effects, and placed the video of the moving mountains over it. The result was the following:

While the rest of the project is done, I don’t have video documentation as Scott has the album tonight BUT I can describe what else was included:

• A full video augmentation of the album cover, starting with the above and ending with other Kid A blips
• A back-cover augmentation that uses Vuforia buttons to toggle between the influences of the album. For example, touch one button and see which Charles Mingus album influenced Kid A, touch another and see which Aphex Twin album influenced Kid A, etc.
• An API integration with 3D text that shows related artists you could listen to if you’re into this particular album
• Videos from the band describing the process of recording Kid A.

Excited to show you all of this tomorrow!

## Baseball: Object As Container

For my object, I decided to use a baseball. At first I was hesitant to use it because, at first glance, it doesn’t seem to have much personality. After all, the object is mass-produced with the intent of being identical in every aspect. However, there’s something about a game-used baseball that does give it plenty of personality. If you’ve ever caught a foul ball, for example, there are scuffs where the bat made contact that give it a distinct feel. Each scuff builds toward the creation of a memory of the moment in which the ball was caught. That’s not to mention the symbolic nature of a baseball itself: a fan of the game, upon seeing a baseball, instantly associates it with something much more than cork wrapped in yarn. Alex Zimmer and I wanted to use this as a jumping-off point.

Originally we had two ideas: imbue my object with a memory or augment a baseball scorecard. The latter was to be a backup option lest Vuforia’s object scanner fail to work properly.

Let me start by saying that it isn’t so much a pain in the ass to use Vuforia’s object scanning software; it’s more of a pain in the ass getting it onto an Android phone. Vuforia only allows the scanner to be loaded onto Galaxy Note 6s and 7s. Luckily, work has a Galaxy 7 that I was able to borrow. After spending 20 minutes trying to figure out how to get the SDK software onto the phone – I’m a lifelong Apple user – I went home to try to scan the baseball. After opening the software and wondering why it wasn’t working, I checked the documentation, only to see that apparently a printed piece of paper is meant to accompany the scanned object.

I went to school, printed the gray sheet of paper, and tried scanning with Alex. Our first scan was sort of successful, as we gathered about 200+ points. We had some success once we tried to augment a cube onto the object, but it was really inconsistent. Alex and I thought that if we wrote something on the ball in permanent marker, it would give the object scanner more to work with. I got a brown Sharpie and wrote “Home Run” on one side and “Dad and I” on the other. This scan yielded about 400+ points and was a lot easier to pick up in Unity.

Now that we had the object scanned we knew we’d have success augmenting it the way we wanted.

At every baseball game, a fan walks away with a ball as a souvenir. Sometimes the ball was a foul, sometimes it was a home run, sometimes a player tosses it to someone. Either way, a fan usually ends up cherishing this ball, be it for the remainder of the game or for the rest of their life. Alex and I wanted to take the way in which the fan received the ball and make it a memory of the ball itself by augmenting it.

Alex and I found the above clip – a very famous home run hit a few years ago – made sure it was from a fan’s perspective (we couldn’t find one from the perspective of the fan who caught the ball), and augmented it on the ball. Merely placing a plane with a video player on the ball wasn’t enough, though. We wanted to give the impression that the ball we were augmenting held inside of it the memory of how the fan received it – almost like an egg or a …pokeball. The final result is attached below.

# "There's a Starman, Waiting in AR"

For this project, Scott Reitherman and I teamed up to create an “app” that we’re actually really excited about. Both Scott and I are big into music; we collect vinyl, talk about album histories, and enjoy introducing one another to new genres (Scott’s more ambient and I’m more funk). We thought it would be cool to create an app that brought the stories of albums to life, one that gave the listener a greater sense of what went into making a particular album. There’s a small book series called 33 1/3 that tells the entire story of an album in a roughly 100-page novella, and we thought it would be cool to essentially turn one of those into an augmented reality experience that takes place in and on the album being focused on.

I think it all started with Bowie. Scott approached me after class and said he had an idea involving augmenting vinyl. He mentioned maybe doing this with a Bowie album. Being a big fan of the Thin White Duke myself, I thought this was a great idea. I added that rather than just showing users random images from when the album was made, it would be interesting to have those images tell a story in and of themselves. For example, if we did Bowie’s last album, “Blackstar,” it would be cool to have the augmented images slowly fade into death as the user advanced through the album.

Scott and I settled on a vinyl he owned called Ziggy Stardust: The Motion Picture. We chose this album because the inside had separate images that we thought would be better to augment other images onto. Here is the final product (a correctly oriented video will be uploaded soon; I just wanted to get this blog post up):

## Unity Does Not Like Videos...

There were a lot of difficulties in making this project, all centering around adding video to Vuforia. While getting the proper scripts in place wasn’t too difficult, getting the video to play along with the audio was an extremely laborious task that took up 90% of the project’s time. The issue seemed to be with the codec and dimensions of the videos. They all needed to be 640×360, and if they weren’t, the audio would play but not the video. When we tried re-orienting these videos in Premiere, we would also have issues where the video wouldn’t play (though the audio would) while holding the phone vertically, but it would play properly while holding the phone horizontally.
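
For reference, here’s a rough sketch of how that resizing could be scripted instead of done by hand in Premiere. This assumes ffmpeg is installed, and the file names are just placeholders, so it is not the exact process we used:

```
import subprocess

def resize_to_640x360(src, dst):
    """Re-encode a clip at 640x360 so both the video and audio tracks play back."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-vf", "scale=640:360",   # force the 640x360 frame size
        "-c:v", "libx264",        # re-encode the video with a widely supported codec
        "-c:a", "aac",            # keep the audio in a compatible codec
        dst,
    ], check=True)

# Placeholder file names.
resize_to_640x360("cover_clip.mp4", "cover_clip_640x360.mp4")
```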

We also ran into issues with how the video was imported into Unity. At first we thought it was okay to drag and drop the videos (MP4s) into the StreamingAssets folder, but then we realized we couldn’t make any adjustments to them. Then we tried importing the videos into the Assets folder, which seemed to be the correct move, or so we thought. It turns out the video needed to be transcoded (thanks to Gabe for the help with this). After THAT was figured out, we still had some issues, but after a bit more troubleshooting we got it to work.

## Future Improvements

Scott and I spent so much time making sure the videos worked and that we had the proper content that we weren’t able to complete what I think is one of the more crucial elements of the project: additional storytelling. While the evolution of the augments follows a particular story, there are certainly subtleties I want to imbue in the project that I think we could have achieved with more time and fewer technical issues. For example, I love the video on the cover and the story it tells; it’s a perfect introduction. However, I wish it had smoke coming from the top of it to tie in the fire coming from the cover. I also wish the videos would get more frantic and cut in and out more to show Bowie losing his mind as his success increased.

Overall though, I’m really pleased with what we got and I’m eager to take the next steps with this app as I think it could be an interesting, non-novelty-driven exploration of augmented reality.

Lab:

This past Friday, Chris Hall, Manning Qu, Oriana Neidecker and I got together to clean up our data. Each of us took turns sitting in front of the computer and getting to know the software a bit better. We started by bringing in a simple walking animation Chris had done with her group the week before. Because her hair was in a ponytail, a lot of the data around her neck was missing, which was actually a good thing because it let us figure out how to use the software properly to clean up the data.

After the walking animation was cleaned up, the other three members headed out and I was able to take a look at a horseback riding animation I had recorded the previous week. The good news was that this data was totally clean and required no additional work, but this may have led to complications down the line.

Workflow:

Overall, the process of using MotionBlender was pretty easy thanks to Todd’s great tutorial. The one thing that was a bit over my head was that the wrists don’t seem to be animating properly. In the attached video you can see that the forearms and wrists aren’t moving the way the skeleton is. Rather than fold at a 90-degree angle, the right arm juts out a bit more, which makes it look like the character is petting something next to him as opposed to his own horse.

Final video:

While I’m happy with the fact that the animation plays well, there are definitely more cons than pros in this first video; mistakes that can be chalked up to my first time using the software. For example, the scale of each of the different objects – the character, horse, and bison – is very off, as is the scale of the foliage. The graphics card in my Mac is also preventing the character from rendering smoothly; instead it looks like a character from GoldenEye.

With that said, I’ve really enjoyed the process and look forward to learning more about the various pieces of software we’re using.

## Projection Mapping

One of my favorite films of all time is 2001: A Space Odyssey. The notion of a prescient monolith delivered to earth by an evolved species is just of endless fascination to me. I thought it would be cool to explore this a bit further for my first projection map.

One of the most famous lines from the text is the last thing Dave Bowman utters before he enters the monolith: “My god, it’s full of stars.”

I thought it would be cool to utilize the period at the end of that sentence, using it as an entrance into the monolith.

In an ideal world, the sketch featured above would take up an entire museum wall. The period at the end would be about 7 feet tall by 5 feet wide, and the projection would be pointed at it. (Man, as excited as I am about this projection, it sure does take the appeal out of it when you film it in your dingy apartment.) That way, there’s a sort of shock when the black period “opens up” and the user enters the world of 2001.

Overall, I’m pretty happy with how things turned out, especially considering I’m still learning Unity. The most difficult technical aspects were making sure the video and door animations worked properly (I admittedly had some help from the Lynda tutorials for the latter). While I’m certainly underwhelmed by seeing the projection on a crappy speaker in my apartment, it makes me really excited to imagine what it would look like in a more ideal setting. Either way, it’s made me very eager to explore the world of projection mapping a bit more.