Social TV Meets On-Demand Content Consumption
Is it possible to create a social TV experience for online (non-linear) viewing?
How can convergence re-invent traditional TV?
My role: UX researcher and designer. This was my thesis project at ITP, NYU, so I did everything: research, concept development, scenarios, user testing, sketching flows and wireframes, and interaction and visual design.
As TV audiences shift to watching online and traditional (linear) viewing decreases, so too does the water-cooler conversation. Tweets and other inherently temporal social interactions that occur around TV tend to lose value when they don’t coincide with a live or time-based event.
Is there a way to create a shared experience when people are watching the same show, but not at the same time? How can user engagement be fostered and encouraged through design?
I began the process with market research and analysis. When this project began, Miso, GetGlue, IntoNow, and other “social” TV apps did not yet exist, so there was little opportunity for competitor analysis; the few comparable products that did exist were designed for live-TV viewing audiences.
Even so, the available market research yielded valuable insights.
The most important finding was that people were already multitasking while watching TV, typically on a phone or laptop. Multitasking was not an obstacle to overcome; it was a pre-existing behavior. That mattered because it indicated there was room to design an experience that lived alongside the viewing itself.
Conceptualizing and designing an experience validated by users.
I brainstormed and workshopped concepts with a project development studio to get ideas and suggestions about fostering engagement around video and see what kind of interactions resonated.
Acknowledging that this project was quite experimental, I wanted user feedback as soon as possible, particularly since the question was not who the existing users were, but whether there were potential users at all. Testing rough mockups and discussing pain points of existing social tools all contributed to refining the project.
The result of the ideation sessions and user validation was a DWO (“do with others”) experience designed for on-demand video viewing — a microblogging social viewing application that tags and timestamps comments, quips, and “snarks” to video content. Other users see this commentary when they watch.
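The key mechanism here is that commentary is anchored to a position in the content's timeline rather than to wall-clock time, so later viewers see each snark at the same moment in the show. A minimal sketch of that data model (the `Snark` and `SnarkTrack` names are my own, hypothetical, and not from the project):

```python
import bisect
from dataclasses import dataclass

@dataclass
class Snark:
    """One piece of commentary, anchored to a point in the content."""
    author: str
    text: str
    timestamp: float  # seconds into the episode, not wall-clock time

class SnarkTrack:
    """All commentary for one piece of content, kept sorted by timestamp."""

    def __init__(self) -> None:
        self._times: list[float] = []
        self._snarks: list[Snark] = []

    def add(self, snark: Snark) -> None:
        # Insert while preserving timestamp order.
        i = bisect.bisect_left(self._times, snark.timestamp)
        self._times.insert(i, snark.timestamp)
        self._snarks.insert(i, snark)

    def between(self, start: float, end: float) -> list[Snark]:
        # Snarks in [start, end): e.g. the window the viewer just played
        # through, polled once per second by the client.
        lo = bisect.bisect_left(self._times, start)
        hi = bisect.bisect_left(self._times, end)
        return self._snarks[lo:hi]
```

Because lookups are by content time, the same track serves every viewer regardless of when they press play.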
I mocked up the experience and set up user tests as soon as possible to validate the concept.
The process revealed two main groups of users: contributors and observers, each with different priorities, goals, and needs. Contributors split further between those who wanted a more opinionated experience and those who wanted a less opinionated one.
With only a rough concept in place and significant social complexity, designing for user engagement required understanding factors that determined participation.
I needed a way to categorize goals by type of user and by the hierarchy of each type's specific needs. Insights from earlier testing were crucial to creating this chart.
Personas and Scenarios
High level personas helped focus the design and understand factors for engagement. Because user testing and validation occurred so early in the process, I was able to model personas and user journeys after actual test results.
User needs and journeys differed greatly based on type of content and age of user. Personas helped encapsulate those differences across content genre, age, and social graph, and scenarios were referenced to make sure design decisions properly accounted for users.
Due to varying levels of engagement across different user archetypes, additional user testing was needed to gauge response across a variety of content while adjusting social graph filtering to match.
Simulating the end experience required multiple rounds of test user participation. Adding to the challenge, all the users had to be from the same social graph.
Feedback indicated that displaying snarks on the TV screen was preferred to showing them on a mobile device. As a result, my initial application design was for an on-TV-screen experience.
With concept and user needs defined, sketching and low-fidelity wireframing of the main interaction was the next step.
After sketching out the basic functionality and user tasks of the app, I started wireframing some of the primary interfaces. I made changes based on feedback and then started higher fidelity mockups.
Several events happened in relatively short order that caused me to rethink the technical implementation. First, the second-screen concept went mainstream. Moving the commentary off the television and onto a secondary device like a smartphone or tablet would allow as much or as little interaction (or distraction) as the user desired.
Second, as the user testing pool grew, more feedback indicated that the on-screen interface was awkward.
Third, the initial plan was to develop the on-screen application for Boxee. Not long after this decision, Boxee announced they were discontinuing development of their standalone software.
Fourth, companies like IntoNow had introduced ACR (automatic content recognition), which allowed smartphones to detect and synchronize with the content being watched. This offered several benefits, the primary one being that the application and interface could live on a phone instead of the TV.
This feedback and these market changes necessitated a redesign of the application. Much of the same functionality would remain, but entirely within a mobile app.
The mobile redesign started with the user flow, which helped address the changes the mobile context required. Initial sketches conveyed functionality and addressed the scenarios.
Incorporating feedback from wireframes, I started visual treatment and design, creating high fidelity wireframes.
From the beginning, this project faced a number of challenges, both technical and conceptual. I’m pleased that through significant work, testing, and research, many of the conceptual obstacles were eliminated. While certainly experimental, my results indicated that a social, participatory television or movie experience is possible. Continued testing and refinement are needed, but the findings are encouraging.
The technical challenges were unfortunately more significant. Rather than relying on timecode, which was likely to differ slightly across platforms and formats, ACR seemed a good way to bypass that problem. Sadly, the initial promise of ACR never came to fruition: the list of development complications grew long once normal playback behavior (pausing, rewinding, and so on) entered the picture.
While there was some hope of creating workarounds for some of the issues, Yahoo (the owner of IntoNow, the primary player in ACR tech) had problems finding practical uses for ACR as well. As a result, they shut down IntoNow.