Hello and welcome to the first iteration of our visualization engine. Y Worlds believes an engine capable of building interactive worlds is the breakthrough needed to address complexity and deep, systemic, browsable knowledge. You are among a small number of people we are asking to experience and experiment with Visual Y.Version 1.

The application has been developed and coded by two brave and compassionate coders, Luke Stanley and Alex Petherick-Brian, with help from Jack Swanson. We all realize that, with our limited time and resources, the version you receive is not yet what we imagine and need, yet it is a substantial step in the right direction. Please take some time to imagine for yourself how Visual Y can someday become our new language for complexity and meaning. Once we hear from you, we will make the engine public.

Y Worlds offers many writings about the limitations of language when it comes to systems and complexity (available throughout our website). Here is a quick summary of what we believe the toolset you are receiving is capable of:

a) Meaning

The challenge with systems and complexity is not finding data and plotting it elegantly so that meaning can be extracted. The big challenge is to synthesize everything we know and do not know about a complex, multivariate topic into a form that instantly translates the noise into meaningful, useful, actionable, and valid signal. One branch of that challenge is to tell a resonant story that helps convey the meaning, which can include all forms of multimedia. Another branch is to design a solution set that meets the N possible specific objectives of the user. If disease is the topic, for example, one will want to know how disease works, how to prevent it, how to treat it, or how to live with it, across a few possible perspectives: macro to micro, and viewpoint (doctor, patient, planner, family member, etc.).

b) Cognition

Humans are wired, by genetic programming and by life experience, to extract knowledge and meaning from the world into a format they can remember and act upon. Most of us are very well wired for visual processing (movement, pattern, change, color, form, etc.) and for audio processing (tone, pattern, change, etc.). We believe meaning is best conveyed through pattern: not lots of lines and variables in a mess across a screen, not lots of lists and outlines, not lots of data graphics, and not storyboard summary mind maps, although we believe those are valuable and on the right track. We believe pattern can best be processed when there is a simple, common language framework for pattern to operate within. Our patterns are a limited number of specific colors, the movements and dynamics of paintbrush strokes, and the connectivity of the strokes across time and space.

c) The proposed Visual Y Process:

1) First, identify small ontological groupings (4-12 variables at most) that hold together as the representational constructs for a particular topic and focus. There can be many such groupings at many levels and perspectives, and there need never be just one instantiation: we encourage alternative or edited versions, and let the ones that work best become widely adopted. Each ontological group is a model-set, a construct set, that identifies the highest-level variables of a particular topic and perspective view. These model-sets are created by drawing cells (one per variable) so that each cell is shaped according to the language (round for living things, triangular for proofs, etc.) and connects to the others in a manner consistent with their interrelationships and exchanges (no lines connect one to the other; cells are understood to be connected if they are in the same model-set). Every cell will have subcells, and subcells of subcells, as necessary to show the layers of detail (we do not yet have a 3D layered version to show you). Our job is to light up the cells to show, for any topic and situation: which cells are active and to what degree (objective 1); which cells are interacting, according to intensity, direction, timeframe, and cause/effect (objective 2); which cells should be characterized according to function and state, using two reds for very poor and poor, a single grey for neutral, and two blues for good and very good (objective 3); and which cells are composed of fully validated knowledge (two greens) or unvalidated knowledge (two yellows) (objective 4).
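To make the four objectives concrete, here is a minimal, purely hypothetical sketch in Python. The Engine does not actually use Python, and every name here is our own illustration rather than part of Visual Y; it simply shows how one model-set's cells and their dimensions (shape, activation, state, validation, subcells) might be structured:

```python
from dataclasses import dataclass, field
from enum import Enum

class Shape(Enum):           # the cell's shape encodes what kind of thing it is
    ROUND = "living"         # round for living things
    TRIANGULAR = "proof"     # triangular for proofs

class State(Enum):           # objective 3: two reds, one grey, two blues
    VERY_POOR = "red-dark"
    POOR = "red-light"
    NEUTRAL = "grey"
    GOOD = "blue-light"
    VERY_GOOD = "blue-dark"

class Validation(Enum):      # objective 4: greens validated, yellows unvalidated
    VALIDATED = "green"
    UNVALIDATED = "yellow"

@dataclass
class Cell:
    name: str
    shape: Shape
    activation: float = 0.0                          # objective 1: 0.0 (inactive) to 1.0
    state: State = State.NEUTRAL                     # objective 3
    validation: Validation = Validation.UNVALIDATED  # objective 4
    subcells: list["Cell"] = field(default_factory=list)  # layers of detail

@dataclass
class ModelSet:
    topic: str
    cells: list[Cell]

    def __post_init__(self):
        # the text above suggests 4-12 top-level variables per model-set
        if not 4 <= len(self.cells) <= 12:
            raise ValueError("a model-set should hold 4-12 cells")
```

A disease model-set, for instance, could then be built as `ModelSet("disease", [Cell("host", Shape.ROUND), ...])`, with interactions between cells (objective 2) layered on top.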

2) Imagine the system of Visual Y
You create a model by drawing it. You show order, sequence, relationship, activation, good/bad, and validation. Each cell and each drawn element is data, code, and knowledge, with automatic links and tags based upon how you draw it. Every source of information and knowledge contributing to every mark in the model is tracked and becomes part of the database for instant reference (we do not have this function coded yet). Every edit of every model is recorded and documented. A curator oversees the entire dynamic system of knowledge creation and adaptation. A proof process, the actual visual documentation of the derivation of every model, is created using triangular forms and inextricably tied to what is produced based upon the proofs and validations. There can be scores of model-sets sitting side by side. There can be zooms into the detail of each model-set: you can drill down from the macro to the micro, or from one perspective view to another. Ultimately these model-sets will be integrated into a big, bold, beautiful world that operates like a video game.

What Visual Y is trying to offer is a simple solution to the complex problem of how to manage complexity. Our answer is to create bite-size, interlinked knowledge maps that carry the meaning desired by the user through their depiction of patterns within fixed, universal, representational, ontological model-sets. Your pre-built models become references for complex topics, and ultimately become the high-level understandings that carry high-value meaning, leaving the detail and derivations just below the surface yet instantly accessible.

You can download the Visualization Engine for Mac OS here and for Windows here. You will then need to open the downloaded file and install the Engine on your computer. Once installed, you can open the Engine and begin to create your own models. Our Quick Cell Concussion model will always open as a default; to clear this model, click the clear button. A more detailed guide to all of the tools and keyboard shortcuts is included below.

You can view all of the models that have been saved by anyone here; they are ordered chronologically. You can open or import each model into the Visualization Engine by clicking the appropriate link. ‘Open’ will load the selected model into the Visualization Engine and will erase any of your previous work (if there is any). ‘Import’ will load the selected model into your current model-set. You can also play a model by clicking the appropriate link. From there you can share the page with anyone, or download the model as an .mp4 file by right-clicking on the movie and selecting ‘Save As.’ You can also download the Engine at the bottom of any model page. We recommend using Google Chrome to access the models page.

So, try drawing an 8-cell model that represents something. Fill the cells with pattern to tell a story about how the cells work. Run the story, save it, share it, and try again. And remember that this is a truly pioneering approach: as with all new paradigms, it may require considerable effort to keep an open and inquiring mind.

You can use the Working Group Alpha Playground to communicate freely about the Engine among yourselves and with us. You will need to register as a member of Y Worlds in order to use the playground. As a note, we have spent more time testing the Mac OS version of the Engine than the Windows version, so please excuse any errors or bugs you come across, and please let us know when they occur.

For technical questions or feedback you can contact Jack Swanson: jack@yworlds.com, 612.567.0123. Or Skype us at (j.swan.) to discuss any aspect of the engine – technical or conceptual.

Thank you for your thoughtful consideration of our first step toward dynamic viz.

Here are shortcuts and suggestions for how to use Visual Y.Version 1.


Brush Tool: Draw using a live line brush, click to use.
Cell Tool: Create a unique closed cell, click to use and draw cell outline.
Pencil Tool: Draw using a plain line, click to use.
Paint Brush Tool: Draw using a live paint brush, click to use.
Text Tool: Add text to model, click to use and then click where you want the text to appear. Press Backspace/Delete to begin typing and hit Enter/Return when finished.
Mouse: Return to standard mouse function, click to use. Use this tool to select objects: click the object on the canvas, or click it in the timeline, to highlight it. Once an object is selected you can change its color by clicking a new color, change its position using the Movement Tool, or copy it using the Copy Tool.
Movement Tool: Move a selected object. To use first click the Mouse tool and then click the object you want to move so it is highlighted. Then click the Movement Tool and move the object. When done click the Mouse tool again.
Copy: Copy selected object. To use first click the Mouse tool and then click the object you want to copy so it is highlighted. Then click the Copy Tool and click where you want the copied object to appear. When done click the Mouse tool again.
Clear Work: Click to clear all work. WARNING: You will not be asked to save when you click this tool.
Save: Click to save your model. The model will be saved using the current Title. To edit the title press the “e” key and then press Backspace/Delete to begin typing and hit Enter/Return when finished.
Open: Click to open a previously saved model; this will erase the current model you have open.
Import: Click to import a previously saved model; this will not erase your current model.
Delete: Click to delete selected object, use the Mouse tool to highlight the object you want to delete.
Compress: Click to compress the timeline.
Toggle Camera Zoom: Click to toggle whether the Camera Zooms are on or off.
Mark Camera Zoom: Click to mark the camera zoom and position on the timeline. The Toggle Camera Zoom Tool indicates whether these zooms will be on or off when the model is played.


Z – focus the selected timeline. If a non-timeline object is selected, place it in a new timeline and focus that.

SHIFT+Z – focus “root” timeline.

CTRL + U/J – alter timeline size

CTRL + X/C/V – cut/copy/paste

D – duplicate selected segment.

1 to 9 – quick save in slot

CTRL + 1 to 9 – quick load from slot

SHIFT + 1 to 9 – quick import from slot

0 (zero) – toggle object browser

C – drop camera marker

T – drop text marker

E – edit the selected text marker; if nothing is selected, edit the title.

SPACE – toggle pause

RETURN – replay from start

DELETE – remove selected object

I – quick load (from title)

O – quick save (to title)

P – scroll down in the browser

CTRL + Q – compress timeline

Escape – quit

Arrows – pan camera

Plus and Minus (minus and equals) – zoom camera

SHIFT + Mouse Motion – move camera

Mousewheel – zoom

Right Click (Mouse) – play from start