Virtual Reality Tour of MAL
About the Project
Founded in 2009 and generously supported by the Department of English at the University of Colorado at Boulder, the Media Archaeology Lab (MAL) operates under the motto that “the past must be lived so that the present can be seen.” Nearly all digital media labs are conceived of as places for experimental research using the most up-to-date, cutting-edge tools available. By contrast, the MAL – the largest lab of its kind in North America – is a place for cross-disciplinary experimental research and teaching using obsolete tools, hardware, software, and platforms from the past. The MAL is propelled equally by the need to preserve and maintain access both to historically important media of all kinds – from magic lanterns, projectors, and typewriters to personal computers from the 1970s through the 1990s – and to early works of digital literature/art created on the outdated hardware/software housed in the lab.
But the MAL's website has yet to fully capture the playfulness of the laboratory space. Every machine is functional, allowing students to tinker with tech ranging from computer interfaces to phonographs; the website, by contrast, is limited to 2-D information and images. Master's students Erin Cousins and Jillian Gilmer, inspired by the construction of knowledge via unguided play, hope to create a virtual reality tour of this space that imitates the freedom and creative openness of the Media Archaeology Lab itself.
Risks and challenges
We anticipate the risks and challenges of this project will lie in working across interdisciplinary lines, learning how to use our hardware, and testing the limits of our software's capacity.
Our project isn't just tech-specific - it is also tech-dependent. To create the virtual reality tour that we want, we need a fisheye lens to use with our software. As photography noobs, we didn't realize (until informed by a well-meaning, more knowledgeable person) that those babies cost upwards of $1,200! Room in the budget, we have not (still working on that "Funding the Project" step).
We set out to find resources on campus that might rent out equipment like a fisheye lens - and we found some! Unfortunately, they were departmentally specific and only accessible to students working on projects for Film or Art History courses. Film and Art History students, we are not.
After brainstorming strategies (our faculty member can plead with the departments to bend the rules!) and possible last resorts (a fisheye app on an iPhone?), we got a deus ex machina in the form of a classmate whose DSLR has a built-in fisheye lens. Thank you Jaime! Lifesaver!
So we're still in the gathering / prepping / initial troubleshooting phase. Soon we hope to be shooting footage and assembling the project itself!
Because we're still waiting on funding and equipment, the actual forward momentum of the project is sort of in limbo - but that means there is more time for tinkering and play! Jill went into the M.A.L. to play with panoramas and familiarize herself with the stitching process that we'll be using with the virtual reality software, and took some sound clips while she was there.
We're hoping that the virtual tour will be a multimedia interface - we want to give information, but also a multisensory experience of what the space is like. The crackling of the phonograph is just as essential as images of the hardware!
To experience a few of the MAL's disembodied sounds, follow the links below:
Because our project relies heavily on specific technology (virtual reality tour software, to be exact), our budget was mainly determined by the cost of that tech - and to be honest, it was higher than we expected!
After researching different virtual tour platforms, we realized that the difference between Basic and Pro packages was also a significant difference in features. Pro packages were often the only way to embed multimedia (video! we need video!) and use what we think might be by far the coolest feature - 3D object capabilities.
Being poor grad students, we don't exactly have the cash flow to fund our expensive software needs. Our answer? Go to the internet! We're planning on creating a Kickstarter to (hopefully) cover some of the costs and get us on our way to making 3D media artifacts into a (virtual) reality.
Jill here, ready to update the world on our team's haphazard experimenting in the media laboratory.
After spending time recording sound, including the lab's Edison phonograph and out-of-tune piano, I decided to play with the panoramic-stitching software. After all, as digital humanists, we know the value of "messing around" in the conception of various projects (to quote Matt Ratto), and the first filming attempt was extraordinarily fruitful as a trial-and-error session.
Here's what I've learned: photo-stitching is a delicate process. As you can see from the attached photograph, the panoramic "bubble" isn't perfect--there are obvious blurs where the program spliced together a series of images. Evidently, the software had an easier time with straight, bold lines... and while it succeeded in erasing me, it created a whirlpool ghost in my stead.
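For the curious, the blurry "splices" come from how stitchers cross-fade overlapping shots. Below is a toy NumPy sketch of that blending step (not the actual software's algorithm, and the array values are made up): when the overlapping regions line up, the blend is invisible; when they don't, the cross-fade is exactly where the ghosting appears.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Cross-fade two horizontally overlapping strips into one panorama.

    The last `overlap` columns of `left` and the first `overlap` columns
    of `right` depict the same scene. A linear alpha ramp blends them;
    when the strips are misaligned, this blended zone is where the
    blurry "splices" show up.
    """
    h, wl = left.shape
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap))
    out[:, :wl - overlap] = left[:, :wl - overlap]   # left-only region
    out[:, wl:] = right[:, overlap:]                 # right-only region
    alpha = np.linspace(1.0, 0.0, overlap)           # weight on `left`
    out[:, wl - overlap:wl] = alpha * left[:, wl - overlap:] + (1 - alpha) * right[:, :overlap]
    return out

# Two 4x6 strips sharing a 2-column overlap yield a 4x10 panorama.
pano = feather_blend(np.full((4, 6), 100.0), np.full((4, 6), 200.0), overlap=2)
```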
Pros: Even if we get no software, no equipment, and no funding whatsoever, we'll at least be able to get this far--and with some editing software and beginner's luck, this image bubble (and the numerous others I created) could function as a base for the project. Projected cost: $1.99
Cons: The software is limited by its lack of interactive ability. I also disliked the physical process of using it: I was spinning around the room 25-30 times per "bubble," weirding out the lab workers and myself. And the stitching process itself was largely computer-operated, leading to an undesired patchwork look.
Hopefully Jaime's camera will function a little better. That said, it was pretty fun to actually become "immersed" in the image. Check out the attached link, which will allow users to "look around" the MAL's storage space.
Our Kickstarter page finally went live this afternoon! This was more work than we initially anticipated, forcing us to physically outline the projected budget, brainstorm re: online marketing strategies, and design Kickstarter rewards for our supporters. The last item was particularly difficult to conceptualize: as financially devoid graduate students, our funding--and more importantly, our time--is extraordinarily limited. What kind of tangible "reward" or "prize" could we possibly offer to secure donations? In this situation, our scholarly appreciation just isn't going to cut it.
After much discussion, we decided on some MAL-themed stickers and blank floppy disks. We would have loved to provide floppies with bpNichol's poetry, but corporate-savvy Erin pointed out that putting bpNichol's material up for "resale" could be a potential legal issue in the future. In the end, this was not a huge sacrifice--after all, how many people still own computers with floppy drives?
At this point, the path before us is clear. If we can get our hands on the necessary software, things will start falling into place. Now we just have to publicize the project, market ourselves with social media, and play the waiting game. No biggie, right?
To read about our project in detail--or if you'd like to donate yourself--please follow the link below to the live Kickstarter page: https://www.kickstarter.com/projects/1836002391/virtual-reality-tour-of-the-mal
After less than 48 hours, our Kickstarter project received full funding from a myriad of supporters (with a buck or two to spare). We can't express our excitement and appreciation for the communal initiative! It feels good to work on something that the Internet has declared worthwhile.
Now that we've acquired funds, it's time to plan out the budget.
This amount may seem troublesome in light of our relatively small Kickstarter goal, which is partially due to our lack of crowdfunding experience ("If we ask for too much money, no one will give!") and partially due to the project's rapidly expanding cost. But fortunately, Kickstarter is only one funding platform of many. Interestingly, we were also contacted by CU administration, which asked us to consider their own fundraising platform for our project (application submitted and currently pending).
It seems there's quite a bit more interest in this project than we initially imagined! With the aid of our mentors, our institution, and our loving crowdfunding community, this virtual tour is well on its way.
Jill here! Today I met with Mia Fill, Crowdfunding Program Coordinator at CU-Boulder, to discuss the MAL project in detail. Mia located our Kickstarter, and given the project's speedy funding success, she contacted us directly via email to offer an internal, university-affiliated crowdfunding platform and discuss its potential benefits for the MAL.
Although I submitted an application to the crowdfunding site immediately, we still haven't heard back from the higher-ups. CU's infrastructure results in a hierarchy of approval, which means things move at a snail's pace. However, the platform also offers a myriad of benefits, most importantly related to funding. Although our project has not yet hit the point of hands-on money-transfer, it soon will, and this process has the potential for disaster. As a backstage hand in CU administration, I have experience dealing with reimbursement from both sides, and it's a hectic process--especially when the money is coming from a multitude of public and private sources.
This is where CU's platform seems well suited to our purposes. The reimbursement process is wholly internal, linking directly to CU accounts and speedtypes. Any funding from this avenue would save us--and our department's beloved office manager, Vicky Romano--an unforeseeably large amount of time in the future. And although it's discouraging to butt heads with infrastructure so early in the process--for example, CU (politely) refused to help advertise our Kickstarter campaign, as I initially suggested--it seems like a worthwhile investment given the platform's wide-ranging academic audience, largely composed of students, faculty, and alumni.
Hopefully, we'll hear back from Mia about the application soon, but the project is gathering momentum. Filming is just around the corner!
This week, we checked in with the English Department's savvy office manager, Vicky Romano, who would handle the money-transfer and reimbursement process. In an effort to avert an administrative crisis down the road, we consulted Vicky before making our large purchase.
Vicky confirmed that accepting money from Kickstarter would be a pain--and transferring public crowd funds into a CU account is out of the question. Simply put, the money transfer is a one-way street: I can take university money, but the university can't take mine. The easiest way is to purchase the software from a personal account, wait until the crowd funds come through at the end of the month, and use Dr. Emerson's research fund for the remainder.
This unfortunately means we'll have to front the entire sum--not a measly figure, as we've determined in earlier posts, but split between the two of us, we'll scrape by! We're incredibly excited to get our hands on the product and play around with its interactive features.
After consulting our calendars, we decided to film on November 18th from 5 - 9pm. Unfortunately, we planned this day more than a month ago, and in that time, the MAL's Open House hours were updated on the website. We showed up, gear in tow, to an empty, deserted building. Whoops.
We now face an unforeseen issue: our time in the MAL is limited. With Dr. Emerson traveling for academic events and Fall Break right around the corner, we somehow forgot that access would be crucial to our project. A week's delay doesn't seem very destructive, but we've allotted significant time for software tinkering over break. If we can't film, we can't tinker, and we'll be stuck with nothing to do while we wait for university life to resume.
Immediately, we inquired whether the English Department had a key to the laboratory (no). In a state of gentle panic, we emailed Aaron Angello and Maya Livio hoping for a miracle--and boy did they deliver! Although everyone is preoccupied with holiday plans, Aaron agreed to meet us at the MAL on Friday, November 20th (a mere two days behind schedule--much better than ten). We're lucky to know such friendly scholars.
Aaron let us into the lab this morning around 11:00am, where we encountered a few technical issues with the hardware (see related post DSLR or Duct Tape?). After making the call to utilize an iPhone, we got to work creating our virtual "bubbles." This part was equally fun and frustrating. Unlike our previous test run, this attempt included a tripod, which greatly improved the picture quality by taking the shaky human hand out of the equation (post-humanism, anyone?). Unfortunately, the technology wasn't perfect either. It's difficult to explain this, so bear with me, but the live-streaming image was out of sync with the images being produced, leading to a host of awkward, failed bubble-spaces. That said, the entire filming process took maybe 90 minutes, and we're (fairly) happy with the results. While it offends our OCD nature, the bubbles themselves are fun to play with and should serve well as a beta-base for the project.
On filming day, getting the camera equipment proved more difficult than expected--and no surprise: thanks to a bit of miscommunication and the upcoming Thanksgiving break, we simply couldn't meet up with the lovely Jaime to exchange hardware.
This led to a rushed search for alternatives. We immediately ran (or more literally, biked) to the English Department to check out the resident DSLR. To our dismay, it lacked any sort of fisheye capacity, and moreover, it failed to attach to Erin's tripod. Our second thought was to use my iPhone, which produced our test shots initially (see post entitled Test Run). To make the phone compatible with Erin's tripod equipment, we decided to call our local hardware stores. One store offered an iPhone/tripod converter for an additional $70--no thanks. Another suggested a $20 selfie stick with a tripod adaptation, which we found tempting. However, since the lab workers specifically opened the MAL for us, we didn't want to waste their time (and ours) by going on a spontaneous shopping trip. In addition, there was no guarantee the newer tech would work.
Rather than burden our already overstretched budget, we decided to ask the resident lab worker for duct tape. Why spend money and waste time when we can be creative instead? Check out a photo of our improvised set-up in From Theory to Practice: DH Barriers.
Using this equipment was naturally frustrating--the duct tape prevented rotational movement and limited access to the screen's buttons, necessitating a constant tape and re-tape process. Still, we managed to get the hang of things, and left the MAL with a beta-set of panoramic photographs.
Although Jaime's camera is still available for use, our rather tight schedule prevents us from filming again until approximately December 1st. To get the project ready to "demo" by our course deadline (December 7th), we'll need to sacrifice the DSLR dream and move ahead with the completed iPhone photos. This is a difficult concession: we're both passionate about making this tour as professional as possible. However, we comforted ourselves with the following facts: 1) we're just beginning this project, and like any large undertaking, there are bound to be delays, 2) we can simply reshoot the photos with higher-quality equipment at a later date and update the site accordingly, 3) the iPhone camera is (believe it or not) almost as good as a DSLR these days. Thanks Apple.
So there you have it! We've got the base of our project, and though it's a little more "beta" than we were hoping, it's a workable start. Next on the horizon is purchasing software, editing the photos, and getting these panoramas up and running.
Erin here! In our digital humanities course thus far, we've talked a lot about the potential the Digital Humanities have to expand traditional humanities and break down traditional barriers - we've also discussed barriers that are inherent to DH, structural and cultural walls that might deter some scholars from taking the DH route in their research and scholarship: departmental bureaucracy and access to funding, coordination between collaborators, technical prowess, access to necessary technology and lab space, and the fact that DH is often process rather than product oriented (even when there is a final product), making it hard to visibly and fully demonstrate work for evaluation.
This project has been the first real attempt at DH making for Jill and me, and I think we can unequivocally say that in our case, all of our theoretical discussions have made themselves empirically present as tangible, often frustrating barriers which require some quick adaptation on our parts in order to move the project forward.
Friday was our first shot at getting raw footage (see Build-in-Progress Filming Day: Attempt 1), and I was repeatedly struck by the surreal feeling of seeing once theoretical problems manifest as real issues:
I in no way want to imply that Jill and I were somehow passively attacked by the reality of DH practice - yet it is fascinating to wade through some of the more seemingly mundane or small-scale (but actually hugely impactful) issues of making a project a reality. It really seems to require a different set of skills than traditional Humanities projects, especially those found in English courses. Instead of being wrapped up in the tunnel vision of a final product, we are forced to be constantly aware of and adapting within process. Instead of researching the work of those who have come before, we are researching ways to do work we’ve never encountered. Instead of flexing our familiar analytical writing muscles, we’re testing out new, making-oriented modes of scholarship - which inevitably means some stumbling along the way. Here’s to breaking down barriers with teamwork and tiny scissors!
Erin here! As we’re moving forward with the Virtual Tour, one thing we have to start thinking about is how such a project should be evaluated within an academic system. Normally, a seminar in the English department would culminate with a conference or seminar paper - a whopping 20-30 pages of individual work, measurable according to traditional standards. Professor assesses paper, student gets grade, everyone moves on to a new semester.
But what are the traditional academic standards for a Virtual Reality Tour?
Since there isn’t a stabilized Humanities protocol for this type of project, evaluation gets tricky - and this isn’t an issue that is relevant only to projects like ours, that have an explicitly digital, tool-based final product, but for projects positioned all over the “making” spectrum of the digital humanities. How do you evaluate the quality of digital poetry? How do you put a numerical value on an online visualization of lab spaces?
For our course, we’ve decided to engineer our own standards and submit them along with our work for evaluation. As such, my mind is mulling over the possible categories - should we be graded on hours put in (seems oddly quantitative)? Should we be assessed based on the quality of the final product (which, this being our first experience with the technology, will probably be in-progress and still being improved after the deadline)?
Shannon Christine Mattern took on this issue in “Evaluating Multimodal Work Revisited”, published in Vol. 1, No. 4 of the Fall 2012 edition of Journal of Digital Humanities (originally published on Mattern’s website here). Her article strives to create a “single, (relatively) manageable list of evaluative criteria” out of categories like Concept & Content, Concept/Content-Driven Design & Technique, Transparent, Collaborative Development & Documentation, Academic Integrity & Openness, Review & Critique.
Many of the questions that Mattern puts forward seem like they will be essential for the evaluation of our project, and might give us some guidance in creating our own standards for that evaluation, especially when it comes to Collaborative Development & Documentation and Academic Integrity & Openness. A large part of our project has been reflection on the process itself (hence the Build-In-Progress!) and a focus on the collaborative, experimental nature of our work in respect to our previous experience. I’m less certain about the applicability of “data” centered questions, and it will be interesting to connect our final product to traditional scholarship—largely, I think, that connection lies in the embodiment of the theoretical discussions we’ve been having all semester.
Will update soon when we’ve set our standards! For now, we keep working and reflecting. Another difference in DH scholarship from my past experience - like the standards for evaluation, the end product of our project is not finite, but constantly shifting.
The first step to progressing with the iPhone photographs is to edit them--after all, while the iPhone technology is decent, we've already established that it struggles with shadowy areas, duplication, and blurry "splices" where things went wrong in the photo-stitching process. In an attempt to undo the damage, I uploaded the photos to my computer and began playing around.
I fixed all sorts of things, but most problems wouldn't be noticeable without reviewing the original and edited photos side-by-side. The program seemed fond of adding an extra chair leg, for example. The biggest issues were photographical errors around the machines themselves (thankfully, there were relatively few of these). A warped computer screen, typewriter, and television sucked up many hours of my time as I attempted to fill in gaps without altering the physical look of the tech.
I ran into another unexpected issue as I ran through the images sequentially for the first time, simulating the process of walking through the laboratory. Although the images are now "clean," the lighting is slightly different in each--making it seem as though the lights are flickering as the tour progresses. Oy!
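One crude way to tame that flicker, sketched below with NumPy: pull every frame toward a common mean brightness. This is just the idea, not what professional tools actually do (they use histogram or exposure matching), and the pixel values are invented for illustration.

```python
import numpy as np

def match_brightness(images):
    """Rescale each grayscale image so all share the same mean brightness.

    Exposure differences between shots read as 'flicker' when the tour
    steps from bubble to bubble; scaling every frame toward a common
    mean luminance is the simplest possible smoothing of that jump.
    """
    target = float(np.mean([img.mean() for img in images]))
    return [np.clip(img * (target / img.mean()), 0, 255) for img in images]

# Two flat gray frames at different exposures get pulled to a common mean.
frames = match_brightness([np.full((2, 2), 80.0), np.full((2, 2), 120.0)])
```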
I'm not sure how we'll tackle the lighting issue, but my photo-editing software is at capacity for the moment. Now it's time to test this batch of photographs with the actual professional software. Wish us luck!
Although it only took a few moments to actually order the software, it took weeks (months upon months, at this point!) to get here. It feels great to finally get our hands on one of the tools required for this project. (Now if only we can get our hands on funding...)
Naturally, we encountered a few unexpected issues downloading the software:
1) We can only download the program onto one device (multiple device functionality is an extra hundred bucks). What device to choose? Dr. Emerson's computer? A school laptop? A personal desktop? I went with my own laptop--hopefully Erin and I will be able to share effectively as the semester progresses so that the work doesn't fall to one party.
2) I assumed I would be able to play with the product immediately, so imagine my surprise when the confirmation email simply asked me to wait 48 hours. I found this odd, but finally decided it was a safety precaution for online sales--if they gave me the software key today, my credit card payment might be declined tomorrow, and they'd be out nearly a grand in projected sales. To sum up, money is (once again) at the root of our project's delay.
In the meantime, I can merely hope that the edited panos function well with the program, wince over my now empty wallet, and hope that the university reimbursement system acts quickly.
Well, team, we've hit a point where the project will have to progress along one of two lines: either we proceed with the photographs we have to create a "rough draft" for Demo Day, or we cease work on the low-quality photo base and spend our time reshooting the space.
Unfortunately, the photos aren't working as well as I'd like them to in the program itself. This is partially due to the prior editing software, which reduced their quality significantly. Sharp corners have become blurry; individual objects lose their detail. As I "immerse" myself in the virtual space, I also find myself wanting more bubbles in the larger rooms, which would require at least a partial reshoot. There's nothing worse than standing 10 feet away from a blurry machine, unable to see it (much less interact with it).
The software itself is remarkable--the features have allowed for a combined audiovisual experience, and this functionality will only improve as we spend more time creating interactive "hotspots." However, we'll need to spend more time on the foundation of our project and iron out the photography wrinkles. A reshoot will also allow us to utilize non-iPhone technology and stitch the photographs ourselves.
While a reshoot will certainly take place, the reality of our university deadline (and the nearing end of the Fall semester) is pushing our previous work schedule off course. We'll need to decide whether the school deadline supersedes our professional schedule--and divvy up the remaining time accordingly.
As a preface to this blog post, it is currently unclear whether or not the "cons" of this software are related to technology or to my own lack of technological talent (most probably the latter). With that disclaimer out of the way, I'll now summarize this week's experience of playing with foreign software.
Pros:
1) The software had no issues incorporating the photographs we created with the iPhone. We are incredibly lucky to have avoided this hurdle.
2) The software incorporates .mp3 sound clips easily.
3) Audio clips play continuously when users move from bubble to bubble, creating a sense of smoothness and "walking about" in the tour.
4) Everything can be customized, from the directions, to the linear space of the tour, to the interactive hotspots, to the loading screen, etc. We have room to be creative in our presentation.
Cons:
1) The software has a "walk" option for moving between bubble spaces, but this option looks much nicer in published tours than it does in the MAL because the computer doesn't know where to walk. Rather than orienting linearly toward the next bubble, our tour preview has shoved users through walls, chairs, and a myriad of other objects, refusing to take the clear walking path down the middle.
2) The images themselves aren't 100% stitched. The far left and the far right do NOT meet, which leads to a rather static, 180-degree, turning-head-from-left-to-right motion rather than a full-fledged 360-degree space. (This may be an issue with the quality of our tripod, which was not technically created for panoramas.)
3) Embedded video has yet to function successfully in the tour. I've attempted several different viewing platforms (YouTube, Vimeo, etc.) with no luck--the final product results in an endless loading screen in the middle of the tour.
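The half-stitched problem in item 2 has a simple diagnostic: in a true 360° equirectangular panorama, the leftmost and rightmost pixel columns depict the same scene, so their difference should be near zero. A hedged NumPy sketch of that check (the sample arrays and tolerance are invented, not from our actual photos):

```python
import numpy as np

def wraps_360(pano, tolerance=10.0):
    """Heuristic check that a panorama closes into a full 360° loop.

    In a fully stitched equirectangular image, the leftmost and rightmost
    pixel columns show the same scene, so their mean absolute difference
    should be small. Our half-stitched bubbles would fail this check,
    which is why the tour only spans a head-turn rather than a circle.
    """
    diff = np.abs(pano[:, 0].astype(float) - pano[:, -1].astype(float))
    return float(diff.mean()) <= tolerance

closed = np.tile([[10.0, 50.0, 90.0, 10.0]], (4, 1))       # edges match
open_ended = np.tile([[10.0, 50.0, 90.0, 130.0]], (4, 1))  # edges don't
```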
Lessons learned:
1) The starting point of the panorama ON FILM DAY matters. It shouldn't, but the program uses the starting point as an orienting device, and because ours were inconsistent, the "walk through" options don't function well.
2) The lighting is a much less significant issue than I'd earlier anticipated. The transitions make the lighting change almost unnoticeable.
3) We need more play time! The software keeps revealing new features at every turn, and the more time we spend utilizing it, the more treasures will turn up.
We're very grateful to have been featured on the Build-in-Progress main page this week! The tool has proved itself invaluable in documenting our project's trials and tribulations. In the world of digital scholarship, especially, there's so much focus on creating a concrete end-product; this incredible site allows us to thoroughly document process, making the project's battles visible to the public at large.
That said, we'd love to see a few other features added to this program. One major problem is a recurring bug from the third tier on--we find ourselves unable to edit posts beyond the second tier. This led to a very frustrating hour of playing; I finally dragged the blog post up to a higher tier and was able to access its content.
Another issue: we can't upload more than one picture or video per post! This is a bummer. We have so many great images, recorded sounds, and videos, and we'd love to embed them in the actual progress site.
Lastly, the "tiered" system has become rather tiresome and reflects a linearity of process that does not exist within our project. It functioned well in the beginning, but with 15+ individual steps, it's becoming difficult to select a "position" in the map that reflects parent-steps accurately. For example, the photo-editing post falls under the umbrella of hardware, software, and physical laboratory space--but the program does not allow for multiple parent-steps or non-linear construction of process. Ideally, we would like more usability here, creating a "web" of process rather than a historiographical timeline.
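To make the "web of process" idea concrete, here's a minimal sketch of a step structure with multiple parents. The step names are examples from our own project, and this is our imagined data model, not how Build-in-Progress actually stores its tiers:

```python
from collections import defaultdict

# Map each parent step to the child steps filed beneath it. Because a
# child can list several parents, the result is a web (a directed graph)
# rather than a single-parent tier or timeline.
children = defaultdict(list)

def add_step(step, parents):
    """Register a documentation step under every parent it belongs to."""
    for parent in parents:
        children[parent].append(step)

# The photo-editing post belongs under hardware, software, AND lab space.
add_step("Photo Editing", ["Hardware", "Software", "Lab Space"])
add_step("Filming Day: Attempt 1", ["Hardware", "Lab Space"])
```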
It's a strange process to review a tool that is so central to our project's development (a process Dr. Emerson recommended), but scholarly brainstorming and suggestions aside, the Build-in-Progress tool is an excellent fit for our documentation needs.
Erin here! A huge part of this project has been learning new skills, whether they be tech-related, collaboration-related, funding-related, or advanced mastery of duct-tape. In putting together the Virtual Tour, we're trying to make an interactive experience by adding supplemental components like sound and video, so I thought I'd try my hand at making a basic but polished welcome message. Such a video, of course, required some video editing, which I have zero experience with.
We shot a short clip of the M.A.L.'s founding director, Dr. Lori Emerson, welcoming digital visitors to the lab. The vision in my head was simple: quality M.A.L. graphic, smooth transition (fade to white?) to Dr. Emerson's message, and fade to black.
I didn't realize (anyone who has any experience with video editing, please be kind and suppress your laughter!) that transitions need time, and that they take that time from the clip itself. I hadn't shot extra footage at the beginning or end of the message, so the end result is a video that seems like it's rushing to do its job and get out of the viewer's line of sight. Not a huge setback, but another reminder that projects like these (especially when taken on by newbs like me) require a lot of troubleshooting. Fail better!
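The arithmetic behind that lesson is worth writing down. A fade overlaps the clip rather than extending it, so each transition needs its own "handle" of spare footage; the durations below are hypothetical examples, not the actual length of Dr. Emerson's message:

```python
def required_footage(message_seconds, fade_in, fade_out):
    """Minimum seconds of footage to shoot so fades don't eat the message.

    A cross-fade consumes time from the clip itself, so each transition
    needs its own 'handle' of extra footage at the head or tail of the shot.
    """
    return message_seconds + fade_in + fade_out

# A 30s welcome message with 1.5s fades at each end needs 33s of footage.
needed = required_footage(30.0, 1.5, 1.5)
```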
Next time: transition footage. When we hit round 2 of this tour, we'll certainly be wiser!
As the project due date approaches, we're beginning to draft a rubric for evaluating the final submission. Digital Humanities work faces a unique challenge on the evaluative plane: because technologies evolve so quickly, traditional humanities evaluation has a hard time keeping up. How does the reviewing process alter when the project moves from a book to a digital piece?
Here's our first draft of what we imagine this formula would look like:
Note: These criteria for evaluation are modeled after Shannon Christine Mattern's evaluative criteria in "Evaluating Multimodal Work Revisited", published in Vol. 1, No. 4 of the Fall 2012 edition of the Journal of Digital Humanities (originally published on Mattern's website). Criteria have been altered, added, and omitted to suit the needs of this particular project.
Concept & Content:
Design & Technique:
Transparent, Collaborative Development and Documentation
Our grading scale is from A to F; the highest grade includes a meeting of all criteria as well as detailed plans for future objectives.
As we near the due date for the semester it's clear that our project, as shown by our Build-in-Progress, is still very much in progress. As you can see in the Crossroads? post, we've been torn between trying to fully understand the process of our DH project, and focusing all of our energy on creating a final project. Final product and attention to process are equally important to our project academically, and we've been eager to work through each step reflectively - this means, however, that a lot of the work we've completed so far will eventually be replaced by a more polished version.
In short, we made a prototype - a first draft, if you will, of a project that will be improved after the semester ends. Now that we've experienced round one of making this tour, we're wiser about problem areas and ready to take on round two. Even better, we've got a plan! In forming our criteria for evaluation, we also wanted to take into consideration how we can improve the project for its final stage. We took a look at all of our objectives (see the Evaluation post) and outlined how we can improve in the future.
Overall: Next Spring, we hope to reshoot our beta-set of panoramic photographs to improve the tour's foundation and extend accessibility to additional mobile platforms. One Kickstarter supporter asked whether or not we would have Google Cardboard usability, and our answer was a (long-term) yes! With sharper images, our tour will be more than capable of being embedded on Facebook, Twitter, and the MAL site, and of transferring between desktop and handheld technologies. We expect to implement these changes next semester as we continue to tinker with and professionalize the tour.
We will continue to work through the creative process through Build-in-Progress. Our final product will also continue to develop after the semester ends. Because we consider this iteration a prototype, we plan to adjust our methods and improve the tour as our knowledge and skill sets evolve.
We plan to reshoot the panoramas using DSLR and Panoweaver photo stitching to give the Tour a higher-quality base and address some blurring / lighting issues with the current prototype. Now that we have the software and can troubleshoot tour-creating issues, we will be able to create a final draft of the Tour more smoothly.
For Transparency and Documentation:
We will link to our own Build-in-Progress site once the final draft of the project has been completed; we also plan to incorporate links to projects from Jamie Kirtz’s students, as well as video of Vujak software with links to Brian Kane’s website. Ideally, the Tour can act as a hub to connect different projects being pursued through the Media Archaeology Lab.
The biggest improvement will be a quality photo base for the tour itself. We will be sure to update as the project develops through the semester (and beyond)!
After three weeks of waiting, we finally received word that the Arts and Sciences Board has approved our project. Woohoo! This means that CU will officially sponsor and promote the virtual tour across campus, Boulder, and alumni populations. It also means that we can do fun things like utilize the CU logo when we advertise.
While we're thrilled to get backing from our university, the timeline is dishearteningly slow. One of this project's most frustrating aspects is its total dependence on others. As literature students, we're used to working individually: a luxury that allows us to set a quick pace. This project, on the other hand, relies on outside resources. We needed approval from my bank, Kickstarter, CUF, the English Department, the Arts & Sciences College, Dr. Emerson, the lab workers, and the general public at large. We're incredibly grateful to have received such resounding support, but undeniably, the collaborative nature of this project also results in miscommunication, wasted time, and long waiting periods.
Naturally, when CU notified us of the project's approval, the fundraising launch timeline was pushed back a month or two. I smiled when I read this email, thinking to myself that university infrastructure is comically inefficient (especially when paired with our Kickstarter initiative, which was fully funded within 48 hours).
Nevertheless, after discussing the university platform as a group (Erin, Dr. Emerson, and I), we decided that any help is good help. CU may be slow, but it's a massive engine with numerous resources at its disposal. Mia also mentioned writing a CU-Boulder Today article about the MAL tour, hoping to promote the project as a part of CU's accessibility campaign. This is only one of the many resources the University of Colorado Foundation can offer!
Conclusion: however inefficient its methodology, the university has resources we need, and it's willing to extend a helping hand. And since we're planning on extending this project well beyond the parameters of this semester, the timeline shift won't stand in our way.
Jill here! This week, I decided to experiment with Panoweaver 9 so that we're familiar with the software for our next filming attempt. I'm pretty pleased with the software itself, but my ability to use it has been disappointing thus far.
To explain what this software does, I'll venture into the land of metaphor: imagine a puzzle. None of the pieces fit and the images they portray are distorted. You gift these pieces to a computer (upload a ton of photos) and tell it to hammer the pieces together. This process actually worked better than expected--the computer is cleverly informed and spots recurring objects, pairing photographs with ease. Unfortunately, this process ceases to work smoothly after about 10-15 images. After that, the puzzle becomes too complex for the computer to assemble without aid. And let's not blame the poor machine... the puzzle metaphor functions well, but only up to a point. Unlike puzzle pieces, these images have more than four sides and four corresponding attachments. To put things in perspective, an image requires five "matching" points with another image to stitch successfully. In a completed tour, images require approximately ten to twenty "stitched" fellows, depending on the size of the space. I'll do that math for you: 65 images * 5 matching points * ~15 stitches = approximately FIVE THOUSAND HAND-PLOTTED POINTS. And after fifteen hours (oh yes!) of plotting, I now understand exactly how not to use Panoweaver.
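For the curious, that arithmetic can be sketched in a few lines of Python. (The figures here--65 images, 5 points per pair, ~15 stitched neighbors--are my own rough estimates from working in the lab, not Panoweaver's documented requirements.)

```python
def estimated_match_points(images, points_per_pair, stitches_per_image):
    """Rough total of hand-plotted points, assuming every image needs
    `points_per_pair` matches with each of its `stitches_per_image`
    neighboring images."""
    return images * points_per_pair * stitches_per_image

# The front-room shoot: 65 photos, 5 points per stitch, ~15 stitches each.
print(estimated_match_points(65, 5, 15))  # 4875 -- "approximately FIVE THOUSAND"
```

Note this is a worst case: in practice some points are shared between pairs, so the real count is a bit lower--but not by enough to save your evening.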
Things Not To Do:
1) Do not upload a photograph that isn't a fisheye (whether full frame, drum, or circular--I had to teach myself the difference).
2) Do not upload a photograph that isn't a wide-angle or panorama.
3) Do not upload more than ~10 images. This was a severe mistake on my part--I initially uploaded about 65 images of the MAL's front room to make sure I got every square inch of it. The software unfortunately didn't know what to do with so many replicated items, crashing every five minutes and taking 10-15 seconds to process every command.
4) Do not assume the computer is infallible. When assigning match points, pay special attention to the structure of the room: the computer struggles with windows, doors, and beamwork because they're in practically every photograph, so a human eye needs to sweep through the batch.
5) Do not upload iPhone panoramas: they tap out at around 180 degrees, which resulted in our beta-tour's rather flat turn-of-the-head sensation. In order to achieve full-fledged immersion, a level of photographic distortion is necessary (fisheye).
I was hoping to create a "test" bubble of the space by tomorrow afternoon--the day we present our beta-tour to our Digital Humanities class--but unfortunately, technological difficulties have prevailed once again. I can't wait to play with the appropriate hardware so that our software responds in kind!
Jill here! This week, I met with Mia Fill to discuss the next step in CU's crowdfunding process. Erin and I are very excited to learn more about our university's infrastructure and internal resources.
First things first: I needed to give Mia a tour of the actual laboratory space. She had never visited the MAL, and I had never given a tour of the space, so the experience was wonderfully enlightening for us both. Since Mia was born in the early 90s, most of the equipment was entirely foreign to her, and this turned a ten-minute tour into a half-hour discussion. Her overt fascination reminded me why I'm so passionate about this project--in spite of its (often) frustrating twists and turns.
Afterwards, Mia and I sat down to discuss the crowdfunding initiative. Step one? We need a video. This is not surprising news--in an online crowdfunding campaign, an eye-catching, to-the-point video clip is key. I suggested a video of Dr. Emerson, Erin, and me in the MAL discussing our project in detail. Mia loved this idea, merely stipulating that the video not exceed ~90 seconds.
Step two? Piggyback off of the Kickstarter campaign. We've already written a blurb about the details of the project, and we can simply expand upon the previously established advertising base. However, Mia suggested that we specifically advertise the project as a part of CU's current accessibility campaign. This led to a long discussion about potential deaf/blind accessibility in the future--how could we make the online tour usable for a disabled population? This will take a bit of research (and possibly software).
Step three? Wait, wait, wait. Again, this isn't too surprising, but hearing the projected project launch date still hurt a little bit: February 1, 2016. Ouch. I have to say, we were hoping things would move a little bit faster, but Mia assured me that this deadline realistically reflects the pace of university initiatives across campus. Ah, well. Erin and I always knew the project would extend beyond the semester's parameters, and we'll both be around in February to troubleshoot the CU campaign.
What's next on our plate? Making that video. Unfortunately, it's finals week, then winter break, then New Years--so that process may be put on hold until university life resumes.
The Kickstarter campaign has finally come to a close. We're so grateful to our backers (who have continued to donate past our goal)! This morning, we provided an update linking to the Build-in-Progress site.
Now it's time to send out our physical and digital rewards. We have a list of backers, information, and various media knick-knacks--but during the horrors of finals week, it will take a bit of extra organizational power to get these babies sent off on time.
We're also incredibly thankful to the multiple donors who waived these rewards with their gifts. You guys are the real MVP.
Next up? Making sure the funds transfer without issue and going down the deep, dark rabbit-hole of corporate reimbursement.
Jill here! Today, I met with Mia once more to set up a marketing calendar. This is not a step that we undertook when launching our Kickstarter campaign, which again speaks to both the slow speed and simultaneous thoroughness of university infrastructure. Once again, we're confronted with a deadline setback: Mia expects the project will launch in mid-February at the earliest. Ah, well. Two weeks of extra prep-time will allow for a more effective campaign.
The biggest part of a marketing campaign? The marketing video, of course! Erin and I are brainstorming about how to structure our video to answer numerous questions in a short time span:
To keep from boring our potential audience, the video itself will not be longer than approximately 90 seconds. With the aid of video editor and film-savvy scholar Michael Piel, Erin and I plan to produce a clip that pitches our cause convincingly and preserves just a *bit* of the MAL's magic.
Interestingly, Mia also emphasized the importance of an effective email campaign: social media is great for garnering public awareness, but the largest donations come directly from alumni emails. To achieve the greatest success, we need to focus our efforts on both social media and "professional" online outreach. She also mentioned writing an article or two for the CU-Boulder Today, a local university news publication. I look forward to working with Mia to get these additional resources up and running!
In the meantime, our focus will be on telling a story with our campaign; our story, Dr. Emerson's story, and the laboratory's story. Hopefully, we'll be able to do that successfully without 'corporatizing' the MAL. Film date is set for 1/21!
A closed campaign doesn't mean our work is over--in fact, we recently wrapped up the long process of sending out rewards to our various backers. Thanks again to everyone who supported our project!
Erin spearheaded this process, hand-delivering merchandise to any friends or family that donated, hunting down addresses for Kickstarter's generous strangers, and sending out individualized emails detailing the preservation of various technological artifacts. We also encountered a somewhat bizarre process for the $1 level rewards, which required us to formally "send," via button, our "love and devotion." (We realize this is a checks-and-balances feature, but it still made us giggle.)
This also made us reconsider rewards for our upcoming CU campaign. The process of hand-mailing Kickstarter rewards, even to such a small number of backers, was wholly exhausting (and actually cost the project a series of shipping fees). In anticipation of the university's much larger campaign, Erin and I need to consider swapping physical rewards (say, a piece of laboratory merchandise) for digital or "in-person" alternatives.
Erin and I met with the wonderful Dr. Lori Emerson yesterday afternoon to touch base after a hectic winter holiday. Dr. Emerson graciously agreed to meet with us on campus during her sabbatical to film a portion of the marketing video. Because she's also a well-known figure in the local and national academic communities, we also wanted to pick her brain: are there any on-campus departments, professors, institutes, etc. that we should specifically target? Any off-campus resources we should be utilizing? Any MAL events coming up that we can advertise in upcoming articles?
Dr. Emerson was, of course, an invaluable resource. Erin and I had already listed the English Dept., CMCI, ATLAS, ASSET, and Computer Science Dept. as potential campaign supporters. To this list, Dr. Emerson added Information Sciences, Media Studies, and CU Libraries. She also suggested targeting the myriad regional businesses and institutes devoted to digital education/art: CU-Denver "Emerging Media," Boulder Digital Arts, Boulder Digital World, Galvanize, and the Foundry Group--all local off-campus resources that have expressed direct interest in the MAL and its goings-on in the past.
Although we had the filming equipment ready, Dr. Emerson was a bit shy about improvising on camera and wanted a bit more time to structure her responses. We'll be meeting with her again on Tuesday, January 22nd to film both the laboratory and her description of it.
In Future Plans, we mentioned a desire to work with Google Cardboard. This incredible software allows users to experience a fully immersive virtual space with merely a smart phone and an inexpensive piece of cardboard. Luckily, the Google Cardboard platform is open-source, sharing its "design manual" freely online. After doing a bit of research, we've narrowed down the tools and technical knowledge necessary to pull off a Google Cardboard MAL:
Software. We need the tools to create a virtual tour to-go. Again, Google and Unity supply these openly, and these materials have remarkably become the universal standard for VR applications. The available software places our project within "the Oculus family of VR devices," and will be compatible with consumer headsets such as the Oculus Rift Development Kit 2 (DK2), Samsung Galaxy S6, S6 Edge, S6 Edge+, or Note 5 handset. We have also downloaded game-building software for the Cardboard platform.
Digital literacy. Of course, with new software comes a new host of technological barriers, most of which will remain invisible until actually tested--but Erin and I are determined to take on the challenge. One issue we've already encountered is the Great Chasm between Android and iOS phone applications. Although inexperienced, I imagined that programming an "app" for Android and iOS simultaneously couldn't be too hard: in my mind, it should be a simple copy/paste. I'd pictured them as branches of the same tree, or perhaps slightly altered recipes (a chocolate and a vanilla cake). Unfortunately, as it turns out, these competing technologies aren't even in the same food group. Realistically, we won't have the time to create both iOS and Android apps by the end of the Spring semester, so we'll have to choose. This is a tough decision with our limited programming experience, but the online community seems to lean toward Android applications (Android being a Google-owned operating system), which work naturally with Google Cardboard.
Experimentation. Virtual reality technologies are in huge demand in 2016, but most applications do not sport interactivity. Since our tour is all about technical interaction, this poses a problem: lacking the funds of the New York Times, much less twelve Go Pros with corresponding tripods and sound equipment, we will not be able to reproduce an interactive, photo-grade, virtual reality video. (NOTE: If you're interested in this amazing work, check out "The Displaced" and step into countries affected by war: http://www.nytimes.com/newsgraphics/2015/nytvr/#the_displaced. And if you don't own a Cardboard Viewer, it's only the price of a movie ticket: http://www.amazon.com/Cardboard-Virtual-Reality-Smartphone-Headband/dp/B010N2EYSI/ref=sr_1_2?ie=UTF8&qid=1454104908&sr=8-2&keywords=google+cardboard+kit). Summed concisely, a video tour--wherein the viewer "moves" through space--is outside our current financial capability. However, a photographic tour is possible. And I wanted to know: what kinds of interactive options exist for still images?
The answer: not a lot. Unity actually offers a wonderful manual on Google Cardboard applications, specifically game design, but this technology is limited to animation. I could build a cartoon laboratory, for instance--a virtual "workspace" (which is cool, don't get me wrong!)--but I can't reproduce the Media Archaeology Lab in the flesh. This is a bit frustrating. I'll keep researching head-tracking tech and see what surfaces. (I have a feeling we'll be seeing this tech emerge in the next 6-12 months, if not sooner.)
In the meantime, Dr. Emerson and I were thrilled to finally see a Cardboard MAL, even if interactivity is currently out of reach. Soon to come: a Google Cardboard sphere of the phonograph room, which sports film and radio equipment, a current PhD student's "scanner" project, and, of course, the beloved Edison phonograph.
Jill here, and I've got good news: my extra research has paid off!
Naively, I wrote off video interactivity in my last post (see Google Cardboard), thinking of our less-than-substantial finances; however, Google has come through once again, this time with the Google Street View Trekker Loan Program, which allows applicants from all over the world to borrow--for free--the expensive camera technology necessary to create professional photograph and video VR spheres. I was already agog at the level of open-sourcing we've encountered in Google tech, but this brings it to a whole new level.
Of course, I applied immediately. As a university-sponsored project, the MAL tour certainly qualifies as "professional" usage of the Trekker tech. (University functions were actually suggested on the site!) The one drawback: once again, with increased collaboration comes a decreased speed of process. Google apparently only checks these applications once per month, and they don't promise to respond to or even acknowledge every request. Neurotically, I'm now wondering how exactly the date of my application will affect its reception; I applied this morning (February 1st)--is it possible that they check applications at the beginning of the month?
Such questions can drive a design team crazy. But hey! The least we could do was throw them a line. Hopefully it won't be dead in the water.
I promised a Google sphere for the Cardboard users, and the time has finally come.
Click here to check out the MAL's phonograph room. (Note: if you don't have a VR headset, just use your desktop!)
I'm (slowly) learning how to clean up the program's stitching errors, so a cleaner version should be just on the horizon. Until then, enjoy this close up of the laboratory's 'phonograph' room, which features film equipment, Jamie's scanner archive project (read more about this scholarship here), and of course, the beloved Edison phonograph.
Last week, we met with Dr. Emerson and finished her portion of the film-making process (she is currently abroad for the 2016 Transmediale conference--safe travels!). Today, Michael Piel, Erin, and I met to review our footage and wrap up filming for our quick-to-the-punch marketing video.
Long story short, because we're a CUF-sponsored project, much of the video's content will be dictated by university infrastructure. (The logos, transitions, and content must all meet CU's guidelines, for example.) This is not to say that the video's creative content is not up for grabs. Quite the contrary! CU encourages us to tell a personalized story with the campaign.
The challenge is making the video personalized in 90 seconds or less while still communicating the importance and growth potential of the project. Erin and I attempted to do this eloquently (knock on wood)--and now Michael, with an abundance of footage, will be in charge of setting a narrative trajectory. We hope to emphasize accessibility, but keep a playful feel. The shots we got should suffice perfectly (think unscripted conversation, laughter, and playing around in the MAL).
What's left now? Michael has the shots he needs, and now he'll get to work editing them. We should have a demo of the video by early next week.
Jill here! Today I met with Mia after a few weeks of researching, networking, and playing with digital software. There were several items on the morning's agenda:
1) The marketing video. Our videographer, Michael Piel, anticipates being finished with editing by Tuesday, February 9, 2016, so sometime next week we'll be ready to roll! Mia reminded me that as a CUF promotion, the video will have to meet several university guidelines in terms of banners, transitions, and logos. We'll keep you posted and upload a link when the video goes live.
2) Corporate finances. Mia encouraged me to check with Dr. Emerson re: a corporate university funding account. If Lori has an account, it'll save Erin and me the arduous process of getting a separate account approved through CU. Either way, the project cannot progress without this step; without a money basket prepared, we can't exactly ask for donations! Once again, I plan to rely upon the sage wisdom of Vicky Romano, the English Department's office manager.
3) Campaign rewards. Erin and I have been brainstorming about the CU rewards campaign for some time. As mentioned in the Rewards blog post, the process of shipping merchandise becomes a real drawback in a fundraising campaign, especially when the project receives more than, say, twelve backers. As a result, we're thinking about options that don't require a shipping fee. Dr. Emerson suggested a personalized tour of the MAL, and this sounds right up our campaign's alley! Along with digitized artifacts, MAL merch, and Google Cardboard viewers, Erin and I feel confident regarding the general appeal and relative lack-of-hassle of our chosen rewards.
4) DATES! We finally set a "hard" date for launch ("hard" is maybe overreaching, as the project could be delayed further at any time): February 22, 2016. This allows me to mobilize regarding the marketing calendar; with dates, I can begin planning ahead and budgeting out time for each phase of our tiered campaign.
After completing so much work, it feels surreal that our campaign's launch date is now within sight. Keep your eyes peeled for the marketing video, which will be coming shortly.
Jill here! It's clear that virtual reality opens up a host of new intersections between humanities and technological studies. I am neither the first nor 'loudest' scholar to posit this; from the technology's inception, researchers hypothesized that, like the computer, the VR headset would revolutionize human-tech interaction.
The effects of virtual reality technology--even this early in its development--are wide-ranging. For those with physical disabilities, the headset can become a way to "walk," "run," "surf," and perform a myriad of other activities that are unattainable in the material world. These apps and others have given patients with limited mobility "goosebumps," to quote Danny Kurtzman, who has muscular dystrophy (NPR).
Interestingly, VR tech also becomes valuable for its ability to place people within disability. For example, Stanford's Virtual Human Interaction Lab recently published work suggesting that "able-bodied users... who experienced color blindness in a virtual world were more likely to voluntarily assist people with color blindness in real life. He has also created a world where users have more limbs than they would in real life. They might have three arms and need to adapt to this new body" (NPR).
Virtual reality, then, shifts the relationship between mind and body, forcing users to occupy potentially alienating spaces and adapt to them. Having played with Google Cardboard software all week, I can attest to the startling feeling of not being able to "stand" in a photo sphere created at "sitting" height (or vice versa--physically standing in a "sitting" bubble, which will make you feel very, very short). Similarly, headtracking software works so well--fools the brain so thoroughly--that a simple lag in the loading speed can induce nausea in users. At this level, one can only imagine this software's future capabilities to induce physical and mental change.
My question: while VR technology has clear application with certain disabilities, most applications aren't intended for a blind/deaf population. Google Cardboard, for example, provides a sense of immersion that is based wholly on sight. How does this technology need to change to adapt to a blind/deaf user?
More tangibly, how would a blind/deaf user access our completed VR tour? Does hard/software exist to add verbal captioning to the laboratory tour? And finally, can we afford to invest in such materials?
Big ideas, big questions, and a whole lot of research to come.
[Photo credit: Dale Garrett of Columbia, Mo., a 96-year-old World War II Veteran, Experiences a Trip to a War Memorial through Virtual Reality. 2015. Colorado Public Radio (NPR). Colorado Public Radio News. Web. 5 Feb. 2016.]
In VR + Disability: Untread Ground, I began researching blind/deaf utility within the world of virtual reality, and the results are fascinating. (Warning: This may become an entire project in and of itself.)
So how do blind users experience virtual reality? Through holophonic sound and binaural recordings, of course! If those aren't household terms for you, don't worry--they aren't for me either. Holophonics, a channel system for stereophonic sound, relies on a two-tone approach to fool the human ear. Holophonic or binaural sound is commonly referred to as an "audible hologram," an illusion which aurally manipulates its listener. A hologram tricks the eye, forcing the brain to fill in visual blanks: similarly, binaural sound devices let the faulty human sensory component do most of the work.
This is where it gets really fascinating. When the brain receives two slightly different tones from two separate "sources"--in this case, the two headphones--it hallucinates a nonexistent middle ground. For example, if the left ear gets a tone at 400 Hz and the right ear gets 410 Hz, your brain will perceive not just the two tones but also a third "beat" at 10 Hz--the difference between them. Put more plainly, the sounds entering your ear are different from the sounds your brain actually "hears." This results in the three-dimensional, fluctuating sound effect. This kind of stuff blows my mind. (As a side note, the auditory illusion can't be sustained without headphones. I'm thinking about the necessity of an intermediate technical device in virtual reality applications--a headset for visual space, headphones for audio--what's coming for the other senses?)
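To make that "third tone" concrete, here's a minimal Python sketch of the idea. (The 400/410 Hz pairing is the example above; the sample rate and duration are arbitrary choices of mine, and this generates raw samples rather than a playable audio file.)

```python
import math

SAMPLE_RATE = 44100  # samples per second, the standard CD rate

def pure_tone(freq, seconds, sample_rate=SAMPLE_RATE):
    """Samples of a sine wave at `freq` Hz."""
    n = int(seconds * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]

# Feed each ear a slightly different tone...
left = pure_tone(400, 0.1)   # left headphone: 400 Hz
right = pure_tone(410, 0.1)  # right headphone: 410 Hz

# ...and the brain "hears" a beat at the difference frequency.
perceived_beat = abs(410 - 400)
print(perceived_beat)  # 10
```

Crucially, each channel must reach only one ear--which is exactly why the illusion collapses without headphones.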
The next question is how to incorporate this technology in our MAL tour. With any foray into virtual reality, especially one focused on accessibility, an aural component must be acknowledged. The laboratory has a remarkable soundscape of music, talking, typing, whirring, buzzing, and beeping. Sound, in large part, is where the laboratory's magic lies, and we're determined to capture it.
Naturally, a binaural microphone is out of our budget. The good news is that there are plenty of tutorials online, most of which require $50 or less (one 'recipe' was only $5). The bad news is that virtually all of these DIY instructions require basic electrical engineering knowledge and soldering equipment, neither of which I currently have. Fortunately, as a university student and employee, I have incredible resources within arm's reach. Just next week, CU is offering a free, two-hour "Intro to Soldering" course, which I will be attending. Viva la Education!
While we wait on soldering equipment (and expertise!) to arrive, take a listen to this 3D virtual haircut! (REMINDER: the 3D function only works with headphones.)
In Filming Day: Marketing Edition, the team gathered to wrap up filming the laboratory's marketing video. Today, filmmaker and resident scholar Michael Piel sat down with me to review the marketing footage, which has been edited down to around 6 minutes total. Mike did a great job editing out verbal trip-ups, so the remainder represents coherent, on-topic discussion of the lab and our project. His question: with only 90 seconds, what should we keep? What points should be emphasized? What points should be left for campaign update videos? How much talking should there be? How much B-roll should we use?
We also decided to return to the lab for one more filming session, if only to gather the MAL's various sounds. We want the tone of the video to capture the quirky and wonderful workings of the laboratory, not just corporatize its goals.
We'll work on selecting final shots for the video, but in the meantime, feel free to check out the attached video for an (out of order) preview.
Jill here! In CU Crowdfunding, we discussed the potentially disastrous process of corporate reimbursement within the university. I'm here to share the ins and outs of this harrowing ordeal, which is still very much in progress.
First things first: before we can launch our campaign, we need to create a CU financial account to accept donations. Creating such an account from scratch is unfortunately quite a laborious process. Luckily, English Department manager Vicky Romano confirmed in a recent chat that Dr. Emerson already has a financial account set up to accept laboratory donations. (Imagine two very excited Master's students doing fist-pumps and cartwheels. This saves us weeks of paperwork.)
In my last meeting with CUF, Mia explained that utilizing Dr. Emerson's account would have one additional benefit: if we surpass our fundraising goal, Dr. Emerson can still take the money. It may surprise you to learn that CU Crowdfunding won't allow projects to accept more money than the original funding goal (unlike Kickstarter, which does not have a maximum cut-off). Because Dr. Emerson's account exists outside of CU Crowdfunding, our ability to accept "icing funds" is wholly contingent upon using a faculty financial account.
The bad news? Dr. Emerson is still out of the country, and without her explicit written permission, we simply can't access the account. I should clarify that there aren't any real complaints here: university process is slow, but secure, and we're more than happy to play by the rules. The real issue is our "hard" launch date, which now somewhat depends on communication with a traveling professor on sabbatical. I met with Dr. Emerson a mere two weeks ago, but since then, the ball has slowly but surely made its way back into her court.
Conclusion? Collaboration is both thrilling and frustrating. It's fascinating to look back at the project's beginning and trace the web of people making it happen.
Erin and I will be giving a talk at CU-Boulder titled "A Process of Making," tracing the various challenges of building a project within university infrastructure. If you live in Boulder, CO, swing by for a sandwich and a quick virtual reality tour at 12:00pm, April 22, 2016 in the Dilts Lounge.
Supportive as ever, Dr. Emerson has offered to host a VR-related tab on the MAL's website! In anticipation of an outside audience, we drafted a blurb about virtual reality, Google Cardboard, and its potential functions in the laboratory. The new tab is still under construction, but if you're curious, check out a quick preview below:
Virtual Reality at the MAL
What is Google Cardboard? Google Cardboard is an open-source virtual reality (VR) and augmented reality (AR) platform. Unlike more expensive headsets like Facebook’s Oculus Rift, Google Cardboard focuses on bringing virtual reality applications to every household at minimal cost. In fact, Google recently worked with the New York Times to send free Cardboard viewers to every subscriber. (If you aren’t a subscriber, don’t worry—cardboard headsets cost less than a movie ticket.)
How does Google Cardboard work? Google Cardboard relies on head-tracking software, which uses a smartphone’s built-in motion sensors (the accelerometer, which registers slight gravitational shifts, and the gyroscope) to determine a given user’s head movement. Paired with the Cardboard viewer, which holds the phone at an ideal distance from a pair of 40mm focal distance lenses, modern head-tracking technology fools the brain’s visual cortex, resulting in a convincing 3D visual illusion.
What equipment is required?
Smartphone (iOS/Android)
Cardboard viewer (purchase here)
Google Street View phone app (download here)
What’s Google Cardboard doing in the MAL? Because Google Cardboard is open-source, it offers users an inexpensive and relatively high-quality virtual reality experience. Utilizing this technology, we hope to create a virtual Cardboard tour that captures the magic of the Media Archaeology Lab for those who don’t reside in Boulder, CO.
Click here to check out the MAL’s phonograph room. (Note: if you don't have a VR headset, just use your desktop!) Enjoy this close-up of the laboratory’s ‘phonograph’ room, which features film equipment, PhD student Jaime Kirtz’s scanner archive project (read more about this incredible scholarship here), and of course, the Edison phonograph.
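For the technically curious, the viewer's lens setup invites a little back-of-the-envelope math. If the phone's screen sits at the lens's focal plane, the virtual image appears at optical infinity, and each eye's field of view is roughly 2·atan(half-screen width / focal length). Here's a minimal sketch in Python; the phone dimensions are illustrative assumptions, not part of any official Cardboard specification:

```python
import math

def cardboard_fov_deg(screen_half_width_mm, focal_length_mm=40.0):
    """Approximate per-eye field of view for a simple magnifier viewer.

    With the screen at the lens's focal plane, the virtual image sits at
    infinity and each half-screen subtends 2 * atan(half_width / focal_length).
    """
    return math.degrees(2 * math.atan(screen_half_width_mm / focal_length_mm))

# Assumed numbers: a ~110 mm wide phone split between two eyes gives each
# eye ~55 mm of screen, i.e. ~27.5 mm of half-width, viewed through the
# viewer's 40 mm focal-length lenses.
print(round(cardboard_fov_deg(27.5), 1))
```

Under these assumed numbers, the sketch works out to a field of view in the neighborhood of 70 degrees per eye, which is why even a cardboard box with cheap lenses can feel surprisingly immersive.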
The video is finally done! Michael finished editing, Mia added the appropriate CU-Boulder logos/calls to action, and the finished version has been uploaded to the University of Colorado YouTube channel. Take a quick look, and look for our campaign launch early next week.
Dr. Emerson, founder of the Media Archaeology Lab, explains the importance of making such a space accessible to online communities.
Today I recorded a myriad of sounds from the laboratory. Take a quick listen!
Our CU Crowdfunding campaign is live! What a lovely, lovely day! Check out our live campaign page here.
An incredible amount of work went into this campaign page, which I'll reproduce in part below:
Dr. Michael Gamer, a book history scholar at the University of Pennsylvania, recently delivered a lecture on the CU-Boulder campus. Following his lecture, Dr. Gamer specifically requested a tour of CU-Boulder’s Media Archaeology Lab (MAL), a site of knowledge production deeply linked to his own academic field. He finally got to experience the “magic” of the laboratory in person, playing with old computers, early word processors, and typewriters (even an Edison phonograph from the 1920s!), but his fascination quickly turned to disappointment when he realized that his students, an incredible 1,700 miles across the country, would likely never see the lab we’ve created. In fact, he lamented, the current website doesn’t capture the spirit of the laboratory: its static images convey facts, but not feeling.
We as CU-Boulder Department of English students took this feedback to heart, and now we’re doing something about it!
From 2D to 3D…
What if we could improve the ways in which the Media Archeology Lab is captured online? We are Erin Cousins and Jillian Gilmer, Master’s candidates at CU-Boulder, and we want to build a virtual tour of CU-Boulder’s Media Archaeology Lab. Utilizing professional software, we believe we can capture the sounds, physical feeling, and most importantly, the interactivity of the laboratory machines — from the Vectrex game station to the Apple IIe!
Help our dream become [virtual] reality!
In order for us to provide full access to the great tools and resources housed in CU-Boulder’s Media Archaeology Lab (MAL), we are raising funds to purchase the software necessary to create an interactive virtual tour. It’s one thing to simply create a walk-through of the space; we envision a tour that not only allows everyone to play with – and study – the equipment remotely, but also guarantees access to the MAL’s resources for those who cannot visit in person.
Why are we undertaking this project?
The MAL’s website has yet to fully encompass the playfulness of the laboratory space. Every machine is functional, allowing students to tinker with tech ranging from computer interfaces to phonographs–but in contrast, the website is limited to 2-D information and images. Master’s students Erin Cousins and Jillian Gilmer, inspired by the construction of knowledge via unguided play, hope to create a virtual reality tour of this space that imitates the freedom and creative openness of the Media Archaeology Lab itself. Utilizing fisheye panoramas, binaural microphones, and professional photo-stitching software, we will create a playful, interactive lab space online.
Who are we?
Master’s students Jillian Gilmer (English) and Erin Cousins (Comparative Lit) began this project in a graduate Digital Humanities course taught by Dr. Lori Emerson. Inspired by the ideological divide between physical and virtual spaces, we began building the virtual tour to promote accessibility within academia, traverse spatial boundaries, and share the laboratory’s magic with the world. We’re particularly interested in the process of making and "doing" within university infrastructure.
Share! Donate! Spread the magic!
Please donate and share our campaign so we can make this happen. All gifts are tax-deductible! We can’t wait to show you the outcome of our project!
*Any excess funds raised will be used to provide general program support for the Media Archaeology Lab (MAL) at the University of Colorado Boulder at the discretion of the director, Dr. Lori Emerson.
**To donate via mailed check, please list Appeal Code: F0038 on checks made payable to the University of Colorado Foundation OR the CU Foundation. Mail checks to: University of Colorado Foundation, PO Box 17126, Denver, CO 80217
Here's the body of our campaign launch email. This information was sent out to (gasp) over 350 individuals this afternoon! Hopefully, this will get us a head start in the month to come.
SUBJECT: Help our dream become [virtual] reality!
Erin Cousins and I are creating something really amazing at CU-Boulder for our senior project, but it requires software that’s pretty expensive… we need your help to be successful! We’ve launched a crowdfunding campaign to raise funds to purchase the software we need, and it would be great if you would check out our campaign for more info and cool content we’ve put together.
Have you checked out CU-Boulder’s Media Archaeology Lab (MAL)? It’s an incredible space! However, if you don’t live in or around Boulder, it’s nearly impossible to experience the magic of the MAL. In order for us to provide full access to the great tools and resources housed in CU-Boulder's Media Archaeology Lab (MAL), we are raising funds to purchase the software necessary to create an interactive virtual tour. It's one thing to simply create a walk-through of the space; we envision a tour that not only allows everyone to play with - and study - the equipment remotely, but also guarantees access to the MAL's resources for those who cannot visit in person.
There’s plenty more about the MAL on our campaign page. Please show your support for the Media Archaeology Lab by donating and sharing our campaign!
Help us spread the magic of the MAL! We can't do it without you!
Erin Cousins and Jillian Gilmer
Jill here! Yesterday I met with Kim Elzinga, the Marketing and Communications Specialist for the Department of English at CU-Boulder. Kim has graciously agreed to promote our project utilizing her numerous resources, which include the English Department website (english.colorado.edu), official English Department Facebook page, and access to ENGL alumni. (This last one is big--her marketing campaigns are typically forwarded to approximately 2,500 English Department graduates and alums! In addition, Mia Fill pointedly does not have access to such contacts, as CU-Boulder jealously guards its alumni emails. Oh, intradepartmental politics.)
Here's an email I sent to Kim this morning:
"SUBJECT: MAL Virtual Tour
Last week, we briefly discussed utilizing the ENGL marketing resources to promote the Media Archaeology Lab and virtual tour project. I'm hoping we can reutilize some of the verbiage from my previous marketing emails.
Since you're in contact with alumni more often than anyone else, I'd love your help crafting the email to target them specifically. I know you mentioned highlighting Erin and me as students--how can I better humanize the project?
The second item we discussed was putting the MAL virtual tour project on the ENGL bulletin board. See below a thumbnail image for the website:
I think the blurb I sent you for the ENGL updates email would work fine.
"In order to provide full access to the great tools and resources housed in CU-Boulder’s Media Archaeology Lab (MAL), Master's Candidates Erin Cousins and Jillian Gilmer are raising funds to purchase the software necessary to create an interactive virtual tour of the space. It’s one thing to simply create a walk-through of the media lab; we envision a tour that not only allows everyone to play with – and study – the equipment remotely, but also guarantees access to the MAL's resources for those who cannot visit in person. Want to help? Here's how."
Write with questions! Looking forward to chatting further."
What does a marketing campaign look like behind the scenes?
In my case... a very colorful Excel spreadsheet. CU-Crowdfunding provided me with the basic template and allowed me to expand on a week-by-week basis, filling in information related to the project's target groups, the number of individual targets per group, and most importantly, marketing strategies specific to each donor category. I'm hoping the spreadsheet will continue to expand as the campaign continues!
While it may not look like much, this document was incredibly work-heavy, requiring intensive research, planning, and--once again--collaboration. Dr. Emerson provided me with the MAL's current contact list, Kim Elzinga will be advertising the project with ENGL alumni on our behalf, and Lauren Samblanet (the lab's event coordinator) is trying her best to get me a solid date for the upcoming artist-in-residence event. In addition, I spent hours researching local businesses and institutes related to humanist digital studies, pulling email addresses from corporation websites.
The second tab of this spreadsheet is a basic trace of people we've contacted--When did we reach out? Via what means? With what message? We'll also be tracing who has donated and in what amounts. This will not only help organize the rewards process, but prevent an embarrassing scenario: asking for funds from someone who's already given! (Check out the second picture attached to this post, which gives a brief overview of the Personal Contact sheet's organization.)
When recently discussing the marketing strategy, CU Crowdfunding Coordinator Mia Fill remarked that most project managers balk at this spreadsheet, intimidated by its attention to detail and corresponding level of work. I certainly agree that the spreadsheet can seem overwhelming, but when managing data, it's important to be thorough--especially in projects like ours, where the process of creation is prioritized and made purposefully visible. I'm hoping this template will allow our team to stay organized, stay personal, and communicate more effectively with our supporters... or in other words, save us more time than we've put into it.
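As a toy illustration of the kind of bookkeeping the Personal Contact sheet handles, here's a minimal sketch in Python. Every name, group, and date below is made up; the point is just the filtering logic that prevents the embarrassing double-ask:

```python
from datetime import date

# Hypothetical miniature of the "Personal Contact" sheet: who we reached
# out to, when, which target group they belong to, and whether they donated.
contacts = [
    {"name": "A. Alum", "group": "ENGL alumni", "last_contacted": date(2016, 3, 1), "donated": True},
    {"name": "B. Librarian", "group": "CU Libraries", "last_contacted": date(2016, 3, 1), "donated": False},
    {"name": "C. Scientist", "group": "CompSci", "last_contacted": date(2016, 2, 15), "donated": False},
]

def next_outreach(rows, group=None):
    """Contacts safe to email again: anyone who hasn't already donated,
    optionally filtered to one target group, oldest contact date first."""
    pool = [r for r in rows if not r["donated"]]
    if group is not None:
        pool = [r for r in pool if r["group"] == group]
    return sorted(pool, key=lambda r: r["last_contacted"])

for row in next_outreach(contacts):
    print(row["name"])
```

A spreadsheet does all of this with filters and sorts, of course; the sketch just makes the underlying rule explicit: never re-ask a donor, and prioritize whoever has waited longest since the last message.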
Last week, @scientiffic, developer of the Build-in-Progress tool, was able to visit the Media Archaeology Lab while traveling. One of this project's founding pillars is collaboration, and I was beyond excited to meet the tool-builder responsible for BiP, which has aided immensely in creating visible documentation of intangible work.
We toured the space, which was a delightful exercise--our friend @scientiffic is a lover of technology, and I could barely convince her to move past the first few machines! (See pictured: @scientiffic with the MAL's Apple Lisa.)
This visit was productive for the project in several ways. It's one thing to use BiP as a reflective tool--it's something else entirely for the tool to evolve in response to usage. I love the Digital Humanities community for its ability to collaborate eagerly across time and space. The adage "if you build it, they will come" rings true in this supportive online community.
Here's to hoping more scholars get to see the lab's magic, whether through an in-person visit or an online tour.
Conversations overheard at the Media Archaeology Laboratory on February 24, 2016. Students are in an Intro to Media Studies course. Lab workers are graduate students. Transcript recorded in the style of Latour and Woolgar's Laboratory Life.
Student 1: So it’s the levers only?
Student 2: Woah that’s really cool.
Student 3: Like, you can tell it’s the lever.
Student 1: No, I don’t see it.
Lab Worker 1: I’m not positive—I’m pretty sure this could actually, sort of, connect to other machines and you could actually type […] so that could actually work—
Lab Worker 2: [across room] I mean, it’s not… it’s not a fax machine, but there’s definitely some aspects to it that—
Student 3: That’s really bizarre.
Student 1: So it goes down, and then […]. Is that how it works?
Lab Worker 1: I’ll look it up really quickly. Here, let me grab my—I’ll look it up later. I’ll look it up later.
Student 4: Okay, if you just want to help me fix it… I have no idea.
Lab Worker 1: [laughter] I’m doing too many things at once! [sound of mechanical cranking] It just needs to be cranked up.
Student 3: Okay. We were wondering.
Lab Worker 1: It’s just—it’s just—like, this little thing needs, it’s totally fixable.
Student 3: Ahhhh.
*sounds of cranking continue ~10secs*
Lab Worker 1: See? Now try it.
[Student 3 & 1 reach for phonograph simultaneously]
Student 3: Sorry!
Student 1: Sorry!
Lab Worker 2: (from across lab) Ohhhh, okay, so it’s an electromechanical typewriter, it sends the message to the---
Lab Worker 1: Ohh, you got it? She’s got it already. [creaking of spinning disk] that’s probably—oh, there. A lot of times you’re like, oh! That’s a man singing, who knew.
2m mark: music starts, but in slow motion [“Nau-ghty li-ttle laaaaad / act-ing might-y baaaad”]
Lab Worker 1: Like that!
Students: [muffled giggling]
Lab Worker 1: The cool thing about machines like this is that you can actually look at it and figure out how they work. Look down here. This little thing—[music stops]—it—it controls how fast the thing spins. And there’s a knob up here, so you can turn it and adjust the speed of the disc. [music speeds up] I have to put it up there like that. And then you can sort of… do this. [music speeds up, slows down, finds perfect speed] The thing is connected to this lever, which can sort of move back and forth.
[“I don’t to have to die to go to heaven / There’s a heaven on Earth I love / Where I can hear the voice of angels / Just as sweet as any angel up above”]
-end tape 1-
Student 1: Oh look, they have fraction keys.
Lab Worker 2: One interesting thing too is that, like, if you think about the differences between, like, the typewriter, and this computer, and what the keys actually do. But also in terms of ideological--
Lab Worker 3: I’ve messed with this machine so many times and never actually seen that. So handy! [student laughter]
Lab Worker 2: --I mean, you’re constrained by the […] and the way you type, right? But then, what happens, I mean you can run a piece of paper all the way through it, right? So what I’m getting at is, there’s more possibilities. But there’s also more room for error. Like, something to think about, is like the ideological background to it.
Student 1: Yeah. Okay.
[sounds of phonograph in background]
-end tape 2-
Student 1: “The light, like, keeps track, is, gives you, like, a measurement.”
Student 2: “Oh, of where you are.”
Student 1: “Yeah of where you are on the page.”
Student 2: “Oh, so that’s why you have a light on?”
Student 1: “I guess so, yeah.”
Student 2: “Okay…”
Lab Worker 2: “I guess it’s pretty useful, looking at structure, and the press, and inventory and stuff like that.”
Lab Worker 3: “I didn’t know that.”
Lab Worker 2: “Yeah, because that back part, it connects to a phone line. And—“
Lab Worker 3: “Really?? Oh my gosh.”
Lab Worker 2: “It’s from like, the early 70s.”
[from across room] Student 1: “Wait, how’d you get it working—“
Lab Worker 3: “That is awesome.”
Lab Worker 2: “Yeah.”
Student 1: “So—“
Student 2: “Oh, you can like, save the margins.”
Student 1: [pause] “Ohhhhh. So.”
[Mario NES music across room]
Student 3: “Hold on, this one is not working.”
Student 4: “I’m gonna take it out.”
Student 3: “Okeedoke.”
Student 5: “I don’t know. Um, we’re not sure, we want to take pictures of a few of them, for a pastry top we were making. So we’ll make, like, a collage picture or something like that.”
-end tape 3-
Student 1: “Cool. Alright.”
Student 2: “…what are we supposed to be doing again?”
Student 3: “I don’t know.”
Student 1: “We’re just messin’ with things. You can also take pictures of them or whatever.”
Student 2: “How does this even work?”
Student 1: “You just pull this little thing. And it turns around. Like this… no. Like this.”
Student 2: “Why does it sound so bad?!”
Student 3: “That’s awesome.”
Student 1: “It’s supposed to be the Star Spangled Banner….”
-end tape 4-
SUBJECT: Thank you for your support!
Thank you for your generous donation to our campaign. Erin and I thought that you would be interested in this artifact we have at the Media Archaeology Lab:
[image of Califone seen in blog post description]
Soon, thanks to people like you, everyone will be able to find out more about artifacts like this Califone 1925 through the Virtual Tour. As we develop the tour, we hope to incorporate even more interactive, multi-sensory components so that visitors will be able to hear this machine play music! Please continue to share our campaign with those you feel would be interested: https://campaigns.communityfunded.com/projects/malvirtualtour/virtual-reality-in-the-media-lab/
In addition, you may claim a personal tour of the Media Archaeology Lab with Dr. Lori Emerson at your convenience! Please reach out to email@example.com to set up a tour schedule. (NOTE: Due to Dr. Emerson's current sabbatical, we ask that you provide us at least two weeks' notice for preferred dates/times.)
If you’d like to see more about how the tour is progressing, follow the links below to investigate the various aspects of this collaborative project:
Examine our process in the making with MIT’s “Build-in-Progress” site, which we’ve used to document the project’s successes and failures. This blog will continue to be updated as the project progresses next semester: Build-in-Progress
Listen to a variety of machines from the 1920s to the 1980s: Lab Sounds
Questions? Suggestions for the future? Feel free to share. We can’t wait to see where this project takes us.
Erin and Jillian
Since the initial campaign launch email, I've updated our list of contacts, removing defunct addresses and organizing our data. Now that we have an orderly list, we can do cool things like sub-targeting (i.e. customize emails based on academic field/relation to the project)! Unfortunately, that does mean that I have to draft several different emails at once. I chose to make six copies: 1) English Department, 2) Computer Science/Information Science Department, 3) ASSETT/ATLAS, 4) CU Libraries, 5) CMCI, and 6) Off-Campus companies/institutes/departments.
Rather than inundating my blog with all six, I'll share the template I used for the English Department:
SUBJECT: Help support the Media Archaeology Lab!
We need your help! Master’s students Erin Cousins and Jillian Gilmer are building a virtual tour of CU’s Media Archaeology Lab (MAL) to reimagine the lab’s flat, 2D online presence – but we need expensive software to do it! This project will allow anyone, regardless of location, to explore the history of technology represented via the MAL’s one-of-a-kind collection. Please help support the MAL by donating or sharing our campaign: https://campaigns.communityfunded.com/projects/malvirtualtour/virtual-reality-in-the-media-lab/
This campaign is an opportunity to not only publicize the English Department in the Boulder community, but teach our students about digital pedagogy and the value of hands-on research. The Media Archaeology Lab is fully functional and hosts regular open hours during the business week (M: 9am-1pm, 3pm-7pm, T: 8am-12pm, W: 12pm-4pm, Th: 9:30am-1:30pm, F: 9am to 1pm). Please feel free to stop in individually or with a class to view what is on display, use the machines, do research, or have a class tour.
From 2D to 3D... In addition, Erin and I wanted to show you what can be captured with inexpensive software! Check out these Occipital 360 photospheres, which allow users to “stand” and rotate in the laboratory space:
The 3D effect works better with a Cardboard viewer, but also functions on a desktop computer or laptop. Keep in mind that this photosphere will be much improved with professional software, allowing us to edit out photo-stitching errors by hand and bring each machine to life. In the future, we hope that users will be able to “turn on” and “listen to” the lab’s Edison phonograph in addition to simply looking at it!
With your support, we can make these 3D photospheres interactive rather than simply immersive. Help spread the magic of the MAL!
Erin Cousins and Jillian Gilmer
Check out the phonograph room on Google Maps, where it has received over 150 views already.
"SUBJECT: The Media Archaeology Lab needs your help!
Erin Cousins and I are building something special at the Media Archaeology Lab (MAL), but we need expensive software to do it! There's still time to help us create an interactive virtual tour of the laboratory. We hope you'll join us! Check out our campaign page here: http://www.colorado.edu/crowdfunding/?cfpage=project&project_id=12105
Have you attended any events at the MAL? The lab acts as a community hub, hosting various speakers, digital artists, and academics from a myriad of departments and institutions. We recently hosted Brian Kane, a digital musician who discussed his video art and performed live on our lab's machines!
[photo of Brian Kane's talk]
Brian Kane's digital performance, November 2, 2015
Come check out the MAL's next artist-in-residence, Jamie Allen, whose work investigates technoaesthetics and critical infrastructure. His performance and lecture will be hosted at 7:00pm, May 20, 2016 at Dateline Gallery. The gallery is located at 3004 Larimer St, Denver, CO 80205, and will be displaying Allen's work from May 2nd to May 20th.
With your support, we can spread the magic of the Media Archaeology Lab to surrounding communities, both in academic and non-academic spheres! Please donate or share our campaign with others!
Erin Cousins and Jillian Gilmer
Jill here! Today we passed the halfway mark in our fundraising campaign: we're now at $587 out of our $900 goal! We are so grateful to all our campaign supporters, without whom these kinds of projects would not come into being.
I also met with Mia Fill earlier this week to touch base regarding the campaign's progress and our marketing strategies going forward. Wonderfully, a team of undergraduate film students has taken great interest in the project and have offered to create a video tour of the MAL. This makes my heart happy. The real goal of this project is to spread the lab's magic to surrounding communities... so by bringing in undergrad minds, we've already succeeded. I'll be working with this undergrad team in the coming weeks to produce this video. (Although we're giving them full creative license, I think a guiding voice might be useful--especially since they've never visited the physical space!)
In addition, Mia assured me that the campaign's momentum is progressing according to her expectations--large bursts of donations on days of direct contact, followed by long waiting gaps in between campaign emails. Our crowdfunding host, CommunityFunded, allows users to visualize this process by charting the dataset. (Check out this post's second photo, which indicates that project donations are directly linked to dates of community outreach.)
I hope to maintain this momentum! Naturally, donations tend to lag in the middle weeks, but campaigns are usually book-ended by larger, more frequent donations and site visits. For instance, while only around a dozen people have donated funds, over one thousand people have visited the campaign page and read through its material (at least briefly). As Mia pointed out in our last conversation, many of these people meant to donate when they saw the campaign launch and simply forgot about it--hence the money spike on days of audience communication. And regardless of funding, the MAL is getting quite a bit of decent PR! I'm very grateful to the CUF platform for highlighting the laboratory's purpose and potential for growth within the university.
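The outreach-spike pattern Mia described can be sketched as a tiny data exercise. All of the dates below are invented for illustration (the real numbers live in CommunityFunded's dashboard); the sketch just counts how many donations land within a short window after an outreach email:

```python
from datetime import date, timedelta
from collections import Counter

# Made-up example data: two outreach emails and six individual donations.
outreach = [date(2016, 3, 21), date(2016, 4, 4)]
donations = [date(2016, 3, 21), date(2016, 3, 22), date(2016, 3, 29),
             date(2016, 4, 4), date(2016, 4, 5), date(2016, 4, 5)]

def near_outreach(day, window_days=2):
    """True if a donation arrives within `window_days` after any outreach email."""
    return any(timedelta(0) <= day - o <= timedelta(days=window_days) for o in outreach)

per_day = Counter(donations)
spike = sum(n for day, n in per_day.items() if near_outreach(day))
print(spike, len(donations))
```

With these toy numbers, most donations cluster within two days of an email, matching the "bursts on contact days, quiet in between" shape the real campaign chart showed.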
Stay tuned in the next two weeks as we wrap up our CUF campaign and (hopefully) collect funds!
Jill here! Working with the English Department's marketing director, Kim Elzinga, we sent an email out yesterday requesting aid from CU-Boulder's English alumni. This is a huge step for the campaign, exposing our project to an estimated ~3,000 alumni eyes.
I have to admit, I'm disappointed it took this long to coordinate an alumni email--our ENGL representative was gone on vacation for quite some time--but regardless of the department's slow efforts, any support is good support!
Stay tuned as we wrap up our campaign in the next 48 hours!
Whew! Jill here, and boy are we happy to say that our project has received full funding. Thanks to everyone who shared our campaign or donated! We received $1,032 in total (almost $150 over our goal) from 28 supporters.
Jill here! I recently met with Dr. Dan Szafir, a recent hire in CU-Boulder's ATLAS program. In February 2016, Dr. Szafir opened a new laboratory on campus called the Interactive Robotics and Novel Technologies (IRON) lab, which focuses on exploring human-centered principles for developing novel sensing, interactive, and robotic technologies. Dr. Szafir also taught an "Intro to Virtual Reality" course at CU last Fall, and was more than happy to share advice.
As we prepare for our talk, Erin and I are putting together visual aids and infographics. Last week, we met and decided to split the work according to what we each contributed to the project: since Erin's contribution was largely in the first semester (she spent this semester passing a thesis defense--woohoo!), she'll present on the first ~4 months of the project, which we've tentatively titled Phase 1.
I'll take over halfway through, moving into Phase 2 of the campaign, which was largely a matter of navigating university infrastructure, marketing in online communities, and planning for the inevitable Phase 3 (coming to theaters Summer 2016).
What's fun about this presentation is that it forced us to do a little math! Check out the attached infographic, which visually quantifies the project's collaborative aspects.
Working with Greg Swenson, News Editor of the CU-Boulder Today, we were able to put together a brief blurb advertising the Media Archaeology Lab, the project, and our upcoming talk. The gist of the article is below:
Master's candidates Erin Cousins and Jillian Gilmer will discuss their project to create a virtual tour of the tools and resources housed in CU-Boulder's Media Archaeology Lab on Friday, April 22, 12 to 1 p.m. in Hellums Diltz Lounge.
Founded in 2009, CU-Boulder's Media Archaeology Lab is a place for cross-disciplinary experimental research and teaching using obsolete media tools, hardware, software and platforms from the past, and is propelled equally by the need to both preserve and maintain access to historically important media of all kinds.
Using the CU-Boulder Crowdfunding platform, Cousins and Gilmer successfully raised the funds needed to move forward with their vision for the virtual tour, which is to not only allow everyone to play with - and study - the equipment remotely, but to also guarantee access to those who cannot visit in-person.
SUBJECT: Help "digitize" the Media Archaeology Lab!
There's still time to help fund a virtual tour of the Media Archaeology Lab (MAL)! Please consider donating or sharing our campaign link, which features updates from the last few weeks of fundraising.
We're so grateful to our supporters thus far and can't wait to see where this project takes us. Check out this playful animation made by a recent visitor:
Unfortunately, not everyone has the resources to visit the MAL in person. With professional photo-stitching software, we hope to transform the laboratory's static webpage into a three-dimensional, interactive space, expanding the lab's magic beyond its physical walls! We hope you'll join us.
In addition, Erin and I will be giving a talk at CU-Boulder entitled "A Process of Making: Bringing Virtual Reality to the Media Archaeology Lab" on April 22, 2016. We would love to meet you in person! Come get a "beta" Google Cardboard tour of the MAL, provide feedback, or ask questions about the project! (See attached poster for details.)
There's one week left to create a virtual "archive" of the MAL! Help make the laboratory's magic accessible for everyone!
Jillian Gilmer and Erin Cousins
This week, we wrapped up the process of sending out physical rewards, including laboratory merchandise and Google Cardboard viewer kits.
Erin and I had a blast this afternoon speaking to graduate students and other members of the community about the difficulties of documenting process in a world obsessed with end-products. Thanks to everyone who came out! If you couldn't make our talk, but want to learn more about the MAL, digital humanities, or virtual reality at CU, please feel free to reach out at firstname.lastname@example.org.
With graduation a few weeks away, Erin and I will now need to wrap up Phase 3 of the project: the reshoot. Stay tuned, folks!
BiP's brilliant developer agreed to sit down with me for an interview. Check out our conversation below, titled "Demystifying Design: An Interview with @scientiffic at the MIT Media Lab."
Demystifying Design: An Interview with @scientiffic at the MIT Media Lab
Jillian Gilmer, Master’s Candidate in English at CU-Boulder, interviews @scientiffic, PhD Candidate in the MIT Media Lab’s Lifelong Kindergarten group. The interview investigates online “Do It Yourself” (DIY) communities, live documentation of process, and the aesthetics of how a project comes into being.
J: @scientiffic, thanks so much for taking the time to sit down with me. For those that aren’t familiar with you or your work, would you mind explaining your role at the MIT Media Lab in detail? Can you describe your research interests and current projects?
S: Sure! I’m @scientiffic, a 5th year PhD student here in the MIT Media Lab. The MIT Media Lab is broadly interested in how humans can use technology as a tool to empower new expression or new ways of living. Essentially, we reimagine how humans might live based on new and developing tools. And that’s pretty broadly defined, so we have groups here that work on everything from prosthetics, to how children learn, or thinking about interfaces—for example, how computer devices might reach beyond keyboard and mouse to adopt virtual reality or gesture-based interfaces, etc. I’m based in the Lifelong Kindergarten group, which is particularly inspired by how children learn in Kindergarten—where things are open, collaborative, and people can really explore with materials—and bringing that openness to people of all ages. Our group thinks about creative learning experiences that involve people actively engaging in design. That has a broad spectrum as well; my colleagues are working on programming as a tool for creative expression. There are also people interested in making in the physical world, which fosters projects combining on-screen interfaces with physical objects. My work in particular is focused on how people capture and share the physical projects that they make, helping people bring to light different design iterations, and showcasing what it means to engage with design. The ultimate goal is to help empower new audiences to participate. I think that when we’re really transparent with design, it demystifies the process—makes design a less scary and intimidating endeavor. It makes it seem less like there’s a genius designer that comes up with an idea in an instant. Our group wants to showcase the iterative creativity that goes into responding to unexpected challenges in a project. My current research projects are both rooted in this concept. 
I’m the main developer for Build in Progress, an online community built to visualize design and showcase how projects evolve over time (see Image 1), and the Spin, a turntable system for creating playful animations of design projects (see Image 2). Both grow out of thinking about documentation and engaging in visual methods of capturing experience.
J: I’d actually like to begin by discussing Build in Progress since it’s a tool I’ve used in my own work. I’m interested in how that project was conceived and built. In a previous conversation, you had mentioned you almost felt like a faux toolbuilder in the beginning stages, and that you didn’t anticipate the project taking off the way it has. So how did it begin for you? What inspired you? How does something like Build in Progress happen?
S: Before Build in Progress, I had been working on different methods for people to capture what they create in ways that enable others to learn from their experiences. When I was at Stanford before I came to the [MIT] Media Lab, actually, I started getting involved in the School of Education. My jumping-off point was thinking about physical constructions and what people make with their hands. So, for example, when kids build with Legos, usually they’ll build really interesting and amazing things with physical construction kits. But when they take it apart afterward, the knowledge of what went into creating it—or even what they made in the first place—often isn’t captured. I wanted to help people capture projects in ways that help them remember what they made—assemble a portfolio of what they’ve created—and in ways that help facilitate feedback and dialogue. When I was starting at the Media Lab in 2011, I started looking at the DIY community, specifically at Instructables, which is a popular site for people to share DIY tutorials. I interviewed people that wrote Instructables to get a sense of how authors thought about what they created on the site. I also did a survey with users to delineate how people used the documentation they found to support their own making. And essentially, what I found was a discrepancy between the format of the site and how people use it. Instructables is set up so that documentation is compiled after a project is made; someone goes through a process of making, and after, they write up step-by-step instructions that would help someone else recreate it. And I discovered that users were altering the Instructables they found, substituting different materials or tools. I found that users derived a lot of joy from changing instructions they found—personalizing it so that it became meaningful to them.
Unfortunately, there weren’t many ways for them to share these modifications on the platform due to the separate nature of sharing instructions and leaving comments. So that’s where my thinking around Build in Progress starts. I began to think about what a platform might look like that enables people to share how a project evolves and what it looks like when we actually go about creating, which is usually not a very straightforward process. There’s a lot of innovation that goes into it, a lot of setbacks, experimentation—but the tool also helps people capture what that process looks like as they’re still developing. I think there could be a really interesting dialogue between creators and users that informs how a project comes to be and how it gets shaped over time. And I think in some ways, people are less receptive to changing a design after they’re done with it. That was my general inspiration.
J: Did you have much experience designing web platforms?
S: My background is actually in mechanical engineering and hardware, so web design was a very new space for me to enter, but I found a really amazing mentor in my group to help me anticipate those challenges. I actually have a project on Build in Progress that documents my personal process of learning how to make the site.
J: You’ve been discussing real-time feedback as being incredibly useful, not just for BiP users, but for yourself as a tool-builder and developer. I’m interested in how the BiP tool was maybe used in ways that you weren’t initially anticipating, and how your original design vision has transformed in response to that.
S: There have been a lot of changes! I’m of the mindset that designers should just put a project out there, see what people do with it, and use that response to inform how it evolves. A lot of changes have been made to the BiP project page in particular. When I first started the site, I intended the branches to be used for separate iterations. So, you try out one version of the design in the first branch, and then if you decide to change it, that change gets represented in the second branch. But actually, when I first started BiP, there was a lot of project collaboration with after-school centers, focusing on really young kids capturing their process of creation as it occurred in real-time. What I ended up finding was that there can be a real trade-off between the amount of time it takes to document and the amount of time it takes to create the project. What I learned very quickly was that if you want to support people sharing multiple iterations, it needs to be in a space where they have the time to iterate. And by time, I mean a few days or weeks to develop a project. That didn’t happen much in the beginning; kids might be building something in, say, two hours, so you would get step-by-step narratives similar to Instructables.
As more audiences found the site, they started using the branching feature in different ways than I’d originally designed it. Very specifically, my thought going into it was that the feature would be used for different iterations. But for example, a lot of schools ended up using it for student group projects, so about a year and a half in we added a feature to add multiple authors to a project. Users began using the branches to show different people’s contributions. Also, the types of projects that I was documenting were typically in the more traditional DIY space. People ended up using branches to separate elements of a project that they were working on simultaneously. So for example, in a physical computing project, there could be fabrication elements. But then there are electronic elements or software elements, which get represented in a different branch. Branch usage definitely evolves over time based on the audience. Additionally, I just redesigned the homepage. Many DIY sites only share finished work, presenting a static grid of images. And those don’t change very much, because when you share the project, it’s done. But at BiP, a lot of the projects are still under development, so the latest uploaded image might change daily. Now on the homepage, you can see a time-stamp of when a project was actually updated, so it doesn’t seem quite as stagnant as it might have even a week ago. And as sole developer, I can think of and enact these changes pretty quickly.
J: I’m fascinated to learn that you come from a hard sciences and hardware background, but have stumbled into what I consider to be a humanistic field of study. Do you consider the MIT Media Lab a humanistic space? Do you find it difficult to straddle multiple disciplines? And to what extent would you now consider yourself a humanist?
S: The intention of the MIT Media Lab is to think about the intersection of humans and technology. I think the humanist element is built into every group here in very different ways. Lifelong Kindergarten in particular is very interested in how people learn and use technology as a learning material, so it starts with people. We’re interested in supporting learning experiences on a shorter timeline than other groups, who might be developing a technology that couldn’t be reasonably employed in the world for another twenty years. It’s not that we aren’t future-facing, but that we’re interested in making an impact in schools and communities today. We’re very user-centered and focused on reaching out to communities. We have a lot of community partnerships, and that helps us not only understand current practices, but deploy the projects we’re working on to larger audiences. It’s a collaborative work space where interdisciplinary scholars come not only to use the facilities, but also to talk to other people, which is so important for development work. Now I’m doing a lot more writing since I’m trying to write my dissertation, but I still come into the lab every day.
J: So a typical day in the life for you, then, normally includes a stop by the lab. That’s interesting, because when we were interviewing digital scholars, physical space wasn’t always important to completion of the work.
S: The physical space is super, super important. There are two lab buildings here: half the groups are in the new building and half the groups are in the ‘classic’ building, as we call it. I’m in the new Media Lab. One thing that went into the design of the space is to try to make it as transparent as possible. There’s a lot of glass, and the labs are structured so that they’re open and shared among multiple research groups. The idea is that when you walk through the lab, you’ll get to see what other people are doing just by walking through the space—just by physically seeing what’s going on—with the hopes that it would foster collaboration.
I think most of the collaborations in the lab come from students taking classes together and working on projects. All classes are project-based (there are no exams). The lab creates space for overlap, and the physical space is definitely designed in order to accomplish that. I myself collaborated with other students early on when I was taking classes. I wish I were sitting at my actual desk (see Image 3): I literally sit right next to an electronics bench, and right behind a 3D printer, and an industrial sewing machine—which I don’t use personally—but I’m surrounded by all these materials, and that has helped me so much over the last year.
J: How much group collaboration is there in the lab space? How often are you in communication with workers in other groups, or asking them for help on a project? I know that the work is very different, as you were saying before, but I’m interested in whether those conversations are happening at the lab.
S: A lot of collaboration ends up happening really informally–between people that you took classes with, or that are within your cohort, or people that are using somewhat similar technology. Or even people working in a somewhat similar space, maybe collaborating with the same corporations or community organizations. As an example, a student in another group works on opening up electronic design to broader audiences, and she has developed a product that helps you make interactive paper creations from circuits that you can use on paper and connect. It’s somewhat related to what I’m working on, but mostly we’re just partnered with similar people.
It’s always helpful to get a different perspective and be able to brainstorm on different elements of your design. As another example, a student in my cohort and I collaborated on a construction kit project, which was my main research project in my first year. That kind of collaboration happens a lot for different students in the lab, but I would say it’s more organically grown. I wouldn’t say it’s top-down, where professors decide they’d like to collaborate on a project. It’s mostly just students. You have friends in other groups, you decide you’re both interested in something, and how fun would it be to work together? I do wish there was more collaboration between faculty. Our faculty are incredibly busy people, and to come together for a larger initiative is very difficult. It’s something that people are striving to make happen more often, though.
One example that has recently come to fruition is a laboratory-wide well-being initiative. People are interested in how we can make the Media Lab a healthier and more welcoming work space for both those of us within the lab and audiences outside the lab. That could come in the form of tools people make or papers people publish. But one simple result is that the well-being initiative has helped people eat lunch together in the lab. So the goal is not always research focused.
J: How much of laboratory collaboration is driven by corporate interests? In our graduate Digital Humanities course this fall, we read Stewart Brand’s Inventing the Future at M.I.T. The text was published in the mid-80s, so it’s describing a very different space than the Media Lab you’re currently inhabiting. But to sum up quickly, every single page is filled with new, amazing imaginings. In a way, the book was saying: if you can dream it, MIT can build it. But in another way, I think the text terrified us. One of my colleagues, Erin Cousins, wrote that reading Brand was like “drinking […] corporate kool-aid” or “being seduced by a supervillain,” glossing over fairly significant issues of cash-flow and corporate branding in academia.
S: Yeah, it can be very difficult to define that corporate-academic relationship. I’m sure it was different in the 80s, but I think one of the most interesting things about the Media Lab model (both a good and bad thing, probably) is that we’re a really well-funded lab generally, which gives students an incredible amount of freedom. The funding model here is set up so that companies have to pay to be members, and membership costs $200K, so really only larger companies can afford to participate. And when you give money to the lab, you actually have access to the IP—but it’s set up so that the students have access to their own IP. If, for instance, they want to start a company after they graduate, then they can. But so can the members, so figuring that out is an interesting challenge. At other institutions, if you get funded by a company, they own all of the student’s IP. The students pay their dues, so to speak. You come up with the idea, and the company takes it because they paid for it. But that’s not the way it happens here, which is cool. When companies give you money, it goes into a consortium fund, and that fund gets distributed across different groups. And it’s funny—typically, when labs are sponsored by companies, then the companies can say, ‘Oh, we want you to develop X.’ But that doesn’t happen in the Media Lab, for the most part. Companies give money trusting that something interesting will come from it. Sometimes it’s a synergistic relationship where a company is really interested in an idea that happens to align with a certain group’s interests. LEGO, for instance, is a sponsor of the lab, but they collaborate with our group in particular. And that’s great for both of us, because they can scale projects in ways that we can’t necessarily as a research lab–but we’re also doing a lot of research that’s of interest to them on how children learn.
For the most part, though, we have a lot of freedom with what we do with the money, which is so unusual, and something I’ve been really thankful for in my time here. No one has ever told me, ‘Oh, this company is now sponsoring the lab, so you have to spend a few months working on this project.’ Occasionally projects will come together based on what corporate sponsors are interested in, but if I join one, it’s because I’m interested in it myself as well. There’s never been a case where I was required to work on something because of the way things are funded. It’s a good way to brainstorm with certain companies too; I know some will host hack-a-thons and specific events to get feedback on something new they’re developing. But otherwise, people in the lab aren’t obligated to work on particular projects due to corporate sponsorship. It seems to me that the MIT Media Lab has a lot of flexibility that other institutions don’t simply because we’re now thirty years old and have built up a reputation in the field. And that freedom is incredibly valuable.
J: @scientiffic, thanks so much for your time. This discussion has been wonderfully enlightening!
Read Erin Cousins’s blog post here: https://dhtoph.wordpress.com/2015/10/10/cousins-post-6-logo-lego-and-lifelong-kindergarten/
Howdy folks! After a healthy break of traveling to New York City, Greenfield Village, Vancouver Island, and the Denver Comic Con, I'm finally home and back at it. A quick team update: Erin and I have both graduated with our MA degrees. While I will be continuing the project post-university life, Erin will be moving into secondary education and liberal arts studies. Thanks again, Erin, for all your hard work on the MAL's behalf, and best of luck in your future!
I now have 5 weeks to focus on tying loose ends--I myself will be moving cross-country on August 1st to begin a new graduate program in Indiana. In this time, my goal for the VR project is the following:
I finally have nice camera equipment, a fisheye lens, a functioning tripod, and (whether it helps or not) a 3D modelling certificate from the University of Victoria. (For the purposes of tracing equipment acquisition, I'll admit I had to order a fisheye lens out of pocket--even with an incredibly diverse and supportive community, occasionally you've got to do it yourself!) I've also spent the last few weeks trying to learn how to take photos with said lens while traveling... and now feel somewhat confident in my abilities as camerawoman.
Here's this week's to-do list: 1) I'm still hoping to track down the building's blueprints, should they exist, although this is naturally proving to be a rabbit hole, 2) I'm reshooting the space this week. The photo stitching process won't take more than a weekend, though it might take *all* of said weekend.
By the end of July, I'd like to have an HD 360-degree bubble tour of the MAL. The tour will include audio elements, info on the space/machines, and full walk-through ability. I'd like this edition to highlight 3-5 machines in each room (the 'coolest' ones, if such a category exists). This will allow me to focus my time on a few machines in detail and reproduce them as accurately as I can in the virtual environment. I'm happy to make each machine interactive over time, but I think this is a reasonable five-week goal, especially now that Erin has moved on. This tour will be accessible via PC and can be embedded in the website.
My long term goals are much more extensive:
-3D audio: Adding sound to the tour is out of my reach for the summer, but eventually, I want the tour to be a noisy affair. For instance, I'd like to incorporate the sound of a class experiencing the MAL for the first time as a theoretical exercise. I imagine this as a button in the tour: "click to hear candid human-machine interaction" or something similar.
-iOS/Android app: This step shouldn't actually be too difficult (she says--ha!), but again, time is the issue at hand. I'll have to chip away at a viable phone application for the MAL once I have the base bubbles established. Once we have an app, you could theoretically experience the entire VR tour with a Cardboard or Oculus headset, moving through space/interacting with objects utilizing head-tracking software.
-General updates? I'm now thinking of how we'll be preserving this resource over time. Specifically, I'm wondering how we'll incorporate new machines, shifting tables and pianos, and other changes to the space. Archiving a text is one thing; documenting a 'living' laboratory is quite another, especially when the collection is so directly impacted by the available space at hand.
Five weeks, big goals, and one really nerdy, really enthusiastic scholar. Let's get started!
Alright, friends, we're finally at a stage where I have all the equipment, funding, PR, community support, and "expertise" I could need--that last one is a constantly expanding category. With these tools, I have to assemble a Beta 2.0 version of our MAL virtual tour. So, technically speaking, I've got this. (Buh-dum ching.)
Over the last week, I've spent a lot of afternoons at the laboratory taking photos, stitching them, un-stitching them, and playing with the Panoweaver software (read: lots of YouTube tutorials). I've learned the difference between circular, drum, and full-frame fisheye images, practiced shooting the laboratory space with different tripod setups, and even successfully built an HQ test bubble! Woohoo! Even a small bit of success can keep me happy for days.
Now I have to decide what I want for the Beta Tour 2.0. A spherical tour? A cylindrical tour? How can I edit out minuscule errors in the stitching process? What machines would I like to spotlight in my (very) limited time?
As always, these questions will require plenty of laboratory play-time. I look forward to it!
NOTE TO SELF: Bring electrical tape to the lab, make X's on the floor, and mark virtual bubble sites. This will allow for a revisit/reshoot if necessary.
I spent a good chunk of my Sunday in the lab this weekend hammering out panoramic details. I encountered several issues last week that have since been solved, but naturally, simultaneously uncovered a whole host of new problems. Such is the way of unfolding projects!
With several failed attempts under my belt, here's my *new and improved* photo-stitching process:
1) The equipment makes all the difference. I'm using a Canon EOS 70D with an 18mm fisheye lens, a combined value of roughly $1,500. This is some of the cheapest hardware on the market capable of producing HQ panoramas.
2) Pick a spot and mark it with tape. Take 25-30 pictures in a slow, ascending spiral motion (for reference, each photograph should overlap about 20% with the last). Do not move your tripod for the duration of the shoot (I actually taped mine to the floor to prevent wiggling).
3) Upload the photos and edit them, paying close attention to tripod legs, blurry spots, or particularly dark areas.
4) Upload the edited .jpeg files into Panoweaver 9. THE UPLOAD ORDER MATTERS. Start with the floor, and in an ascending horizontal spiral, work up to the ceiling. If the first image in a set happens to be a bookcase, Panoweaver will botch the panorama's orientation, and you'll end up standing on that bookcase in the tour.
5) Select the photo type (full-frame for me) and the HFOV value (88 for full-frame photos). Hit "stitch." The program often errors at this point if the stitching parameter is off, the photos are uploaded in a strange order, or any of the photos are uploaded in a different orientation (i.e., portrait vs. landscape).
6) Once your panorama is successfully stitched (usually takes 10-15 minutes), edit out any stitching errors by manually plotting points. This can take a bit of time, but will correct any crooked lines in the final edition.
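For anyone reproducing the shoot, the numbers in steps 2 and 5 can be sanity-checked with a little arithmetic. This is just a sketch, assuming the 88-degree full-frame HFOV and ~20% overlap mentioned above:

```python
import math

def shots_per_ring(hfov_deg: float, overlap: float) -> int:
    """Shots needed to cover a full 360-degree ring, given each shot's
    horizontal field of view and the desired fractional overlap."""
    fresh_degrees = hfov_deg * (1.0 - overlap)  # new coverage per shot
    return math.ceil(360.0 / fresh_degrees)

# Assuming an 88-degree HFOV and ~20% overlap per shot:
ring = shots_per_ring(88, 0.20)
print(ring)      # 6 shots per ring
print(ring * 5)  # 30 shots across five rings, floor to ceiling
```

Six shots per ring over four or five rings of the ascending spiral lands right in the 25-30 photo range from step 2, which is a reassuring consistency check.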
This is where I'm hitting a bit of a wall (or ceiling, to be precise). The program isn't doing very well with floors or ceilings, leaving gaping black holes in the 3D pano. Theoretically, the program has a floor/ceiling tool to create an image in this space; however, it looks like a throw rug tossed sloppily down on tile. I'm not a fan.
I'm hoping the solution to this problem is, once again, a little bit of manual hand-plotting. I don't mind spending an extra couple of hours on this project if it means the spherical floor component doesn't look cartoonish!
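Since the upload order in step 4 matters so much, it's worth automating rather than dragging files by hand. A minimal sketch, assuming the camera writes an increasing frame number into each filename (as Canon bodies typically do), for feeding the photos to the stitcher in shooting order:

```python
import re
from pathlib import Path

def ordered_shots(folder: str) -> list:
    """Return the folder's .jpg files sorted by the camera's frame
    number, so the floor-first, ascending-spiral shooting order
    survives into the Panoweaver upload."""
    def frame_number(path: Path) -> int:
        match = re.search(r"(\d+)", path.stem)
        return int(match.group(1)) if match else 0
    return sorted(Path(folder).glob("*.jpg"), key=frame_number)
```

A plain alphabetical sort would put IMG_10 before IMG_2; pulling the numeric frame number out first is what keeps a stray bookcase shot from ending up first and flipping the panorama's orientation.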
Unfortunately, it's been quite some time, and I still have not been reimbursed by the university for this project's expenditures. Turns out, even when you successfully raise funds, getting your hands on them can be a challenge!
This process has involved myself, the office manager, the College of Arts and Sciences, Dr. Emerson, the laboratory, and CU Crowdfunding. Talk about covering all your bases. No one seems to know where the money went, in what amount it was deposited, or when.
At this point, I'm thinking about this investment as a pregnancy (oh yes--we're well past the 9 month mark). And accordingly, I'll think of the reimbursement process as a long, drawn-out labor period.
Long live bureaucracy!