Category: Quantum Cinema

  • Tesseracting: Gabriel Shalom’s Hypercubist Cinema

After watching a pretty amazing Adobe R&D video on Vimeo, I browsed through the comments and stumbled upon this:

    “There is totally a better word: Tesseracting. Because what you are developing at Adobe is a prototypical system for frameless, hypercubist cinema. I write a blog on the future of cinema and your tool fits nicely into my forecasting of the end of the celluloid-influenced paradigm of flat video frames, transforming them into hypercubes. -Gabriel Shalom”

The Quantum Cinema blog is now in my Google Reader…I’m quite excited!

Bonus link: MIT’s Center for Future Storytelling

  • Semantic Structures and Storytelling 2.0

I found another post on the web about the impact of technology on filmmaking, videomaking, and animation. In this case, ideas borrowed from the semantic web could be used to enhance traditional narrative structures and possibly change the way people consume and create media.

This approach lends itself to the portability of a character’s representation across multiple instances, types, and modes of story delivery.
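To make the portability idea concrete, here is a minimal sketch of what a reusable character record might look like. The JSON shape, the character “Ava,” and the `render_for` helper are all hypothetical, loosely inspired by semantic-web conventions like JSON-LD; the point is that one description can travel across delivery modes.

```python
import json

# Hypothetical character record, loosely modeled on semantic-web ideas:
# the same description could travel with a film, a webisode, or a comic.
character = {
    "@type": "Character",
    "name": "Ava",
    "traits": ["curious", "stubborn"],
    "appearsIn": ["film", "webisode", "graphic-novel"],
}

def render_for(medium, record):
    """Return the slice of the character record a given delivery mode needs."""
    if medium not in record["appearsIn"]:
        raise ValueError(f"{record['name']} has no {medium} representation")
    return {"name": record["name"], "traits": record["traits"], "medium": medium}

print(json.dumps(render_for("film", character)))
```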

Mike Brent recently posted a link to a very interesting blog called “storyfanatic” which, if you read it, will eventually lead you to Dramatica, a piece of story-creation software. Makes me wonder…does Dramatica support the creation, editing, and exporting of metadata structures?

The potential for innovation is very real if one could take a screenplay and export semantic structures and metadata along with it. Screenplay files wouldn’t be destined just for printing on paper anymore. Editing and authoring software for video would then need to support embedding that metadata, similar to importing subtitles or other elements, and syncing it up with timecode. If not in the NLE, then perhaps the metadata could be encapsulated in the typical MOV, AVI, FLV, or other media container files.
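The subtitle analogy suggests what a timecoded metadata track might look like. Below is a rough sketch: each entry ties a semantic annotation to a span of timecode, the way a subtitle entry ties text to one. The track format, the 24 fps assumption, and the function names are all illustrative, not any real NLE’s API or container spec.

```python
# A sketch of a timecoded semantic-metadata track, analogous to a subtitle
# file: each entry binds tags to an in/out span of 'HH:MM:SS:FF' timecode.

def tc_to_seconds(tc):
    """Convert 'HH:MM:SS:FF' timecode to seconds, assuming 24 fps."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return h * 3600 + m * 60 + s + f / 24.0

track = [
    {"in": "00:00:05:00", "out": "00:00:12:12", "tags": ["protagonist", "kitchen"]},
    {"in": "00:01:00:00", "out": "00:01:04:06", "tags": ["flashback"]},
]

def annotations_at(track, tc):
    """Return the tag sets active at a given timecode."""
    t = tc_to_seconds(tc)
    return [entry["tags"] for entry in track
            if tc_to_seconds(entry["in"]) <= t <= tc_to_seconds(entry["out"])]

print(annotations_at(track, "00:00:10:00"))  # prints [['protagonist', 'kitchen']]
```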

I’m viewing this as a platform for remixing my own projects. In that sense this model would support transmedia projects or new forms of serialization. But the reality is that the same platform would be available to the users or creators of any content for co-creative transmedia.

    Possible workflow in the future?

    • Create content (ie. text, audio, video, animation, etc.).
    • Create metadata (automatically via the content creation stage above or manually).
    • Bring content and metadata together in editing (ie. enhance media and semantic components).
    • Author media deliverables with metadata channels embedded (visible or hidden).
• Author- or user-based post-editing and remixing systems arise, able to search, find, index, collate, and remix media from any “smart media” file, whether mixing entire projects or specific nodes of data deep within a project.
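The last step of the workflow above, search, collate, and remix from “smart media,” can be sketched in a few lines. Here the embedded metadata is faked as plain dicts already read out of the containers; the file names, tags, and cut-list shape are all hypothetical.

```python
# A minimal sketch of the remix step: given "smart media" files whose embedded
# metadata has already been extracted (faked here as dicts), search by tag and
# collate the matches into a simple cut list a remix tool could consume.

library = [
    {"file": "scene01.mov", "tags": ["ava", "kitchen"], "in": 120.0, "out": 148.5},
    {"file": "scene02.mov", "tags": ["ava", "rooftop"], "in": 0.0, "out": 33.0},
    {"file": "scene03.mov", "tags": ["flashback"], "in": 10.0, "out": 22.0},
]

def find_clips(library, tag):
    """Search the indexed metadata for clips carrying a given tag."""
    return [clip for clip in library if tag in clip["tags"]]

def collate(clips):
    """Collate matches into an ordered (file, in, out) cut list."""
    return [(c["file"], c["in"], c["out"]) for c in clips]

print(collate(find_clips(library, "ava")))
```

The same search could just as easily target a single “node,” one tagged span deep inside a project, rather than whole clips.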