Notes

Systems, Creativity



More on Robots

Robots do only what they’re told. Robots can’t generate an original thought, and when they do, it is because of an anomaly in their scripts.

Robots are there to output other people’s directions and ideas.

Robots don’t look back at their lives and ruminate on new ideas and new thoughts.

Robots can never answer what they’re interested in. Robots are made of circuits which were made by someone else.

Robots are focused. But they have no peripheral vision*.


* I borrowed this from Tinker Dabble Doodle Try by Srini Pillay, M.D.

View →


The Long Tail is a Facade

The Long Tail is three-dimensional; it is more of a long facade. Behind each of those points hides an instance of an offering, a system. Some are shallow (Peach) and some are deep (APC Surplus).

Just showing up on the tail (with an agile startup) is not enough. In fact it is wasteful (cognitive hours burnt).

Small, sustainable, branded communication connects to fellow human–humans and repels human–robots (those people who like the ads on the L train, or who, at work, don’t offer any more usefulness than a company’s website).

Continue reading →


Exploration vs Navigation

Referencing a comment by Shane (Parrish) in his interview with Sophie Grégoire Trudeau: the act of reading is one of self-exploration. When we read, we’re able to place our psyche in a new context, a new reality, without needing to build one.

There is a point in our childhood when we don’t mind getting lost, when we’re open to being alone at sea. It is the lack of familiarity that excites us.

Then we get older: we set goals, we navigate, we put destinations in our GPS, and we curb curiosity in favor of groupthink.

Getting lost is curious, vulnerable and creative. A process of exploration (going on a walk, reading a book, writing, meditating) will always result in a bigger, more liminal field of thinking.

Navigating (going where you know) is part of modern life, but it diminishes in value. We must remind ourselves to build new mental experiences, explore new modes of being, and acquire...

Continue reading →


Informational and Transformational Learning

An informational learning process fills existing buckets with more knowledge. Say math, computer science or sociology.

When learning transformationally, we create new buckets. It is not so much about learning the material itself as learning that the material exists. By learning of the existence of these buckets, we create mental models of connections we didn’t have before.

Machines are of course infinitely capable of the former, and notoriously useless at the latter.

p.s. we can also think of informational learning as answers, and transformational learning as questions

View →


Delivering a Line, Not a Point

Strategy draws a line of meaning, and finds the exact right dot on it. For example: A product bridges the tension between independence and belonging, with an exact point of balance between the two.

Jennifer Garvey Berger makes the case for workplace complexity – both externally, in the organization, and internally, in the leader’s perception of a rapidly changing world. The focus ought to be on the hypothesis and the layers of meaning.

Under that view, the line of meaning could be more valuable than the dot. If I engage a consultant to write me a strategy, it is only valid as long as the world stays the same.

With a world changing faster than I expect, being able to understand the field of meaning could be worth more than a single statement.

Continue reading →


The Problem with Minimalism

Theo van Doesburg, Counter-Composition VI, 1925

The issue with minimal design is that it is conditionally utilitarian, and reductive.

That fact, in and of itself, does not doom it to losing to a machine designer, but it is a slippery slope down the volcano crater.

A good litmus test is thinking about what is driving our decisions. Is it extreme efficiency? Are we relying on statistical optimization? Examples of this would be any number of sites using no styling, Times New Roman (or another agreeable default typeface), and a grid of images.

Or are our decisions driven by establishing a new mental model for usability? By creating a (uniquely human) delight? Something innate, and not at all reducible to a universal rule for what good design is.

Minimal design operates on the reasoning of efficiency, but without at least one behavioral (read: human) delight it might as well...

Continue reading →


Qualitative Complexity

The field of ethnography counters the anonymity of a constituent of a system. It slows down the assembly line and asks for thoughtfulness and attention. Findings from this process should then back-propagate into the logic that drives the system.

In a way, we might think of ethnography as a dam slowing down the inevitable current of the standard view and of top-down system design.

Complexity, on the other hand, is the science of emergence, the discipline where connectivity trumps rules and agency drives emergent behavior, more so than design.

It looks at systems as evolving and adaptive concoctions, where agents are encouraged to pursue their own goals whilst working collectively.
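
As a rough sketch of that idea (my own toy illustration, not from the post; all names and numbers are hypothetical), a handful of agents each pursuing their own goal, while being pulled by the few agents they happen to be connected to, will settle into clusters nobody designed:

```python
import random

# Toy agent-based sketch: each agent pursues its own goal, but is also pulled
# toward the agents it happens to be connected to. No central rule dictates
# the outcome; the clustering emerges from connectivity.

class Agent:
    def __init__(self, goal):
        self.goal = goal                       # the agent's own objective
        self.position = random.uniform(0, 10)  # where it currently stands
        self.neighbors = []                    # connectivity, not hierarchy

    def step(self, own_weight=0.06, social_weight=0.04):
        pull_to_goal = self.goal - self.position
        pull_to_peers = 0.0
        if self.neighbors:
            avg_peer = sum(n.position for n in self.neighbors) / len(self.neighbors)
            pull_to_peers = avg_peer - self.position
        self.position += own_weight * pull_to_goal + social_weight * pull_to_peers

agents = [Agent(goal=random.uniform(0, 10)) for _ in range(20)]
for a in agents:
    a.neighbors = random.sample([other for other in agents if other is not a], 3)

for _ in range(200):
    for a in agents:
        a.step()

print(sorted(round(a.position, 1) for a in agents))  # loose clusters, not a designed layout
```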

There is a field that is missing a brand: the nexus of decision science, branding, marketing for the smallest viable audience, and emerging technology.

If that field is of interest, I ask you to fill this short form.

Continue reading →


Cybernetics and NP Problem

Norbert Wiener branded the field of cybernetics with his 1948 book, where he formulated the ideas of feedback and self-correction. Since then, a lot of this logic has been used in systems thinking (especially around design and design thinking) and in AI (primarily in the pursuit of a thinking machine).
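
To make the core idea concrete (a minimal sketch of my own, with hypothetical numbers rather than anything from Wiener’s book), a cybernetic loop measures its output, compares it to a goal, and feeds the error back as a correction:

```python
# Minimal cybernetic feedback loop: observe, compare to the goal,
# and self-correct in proportion to the error.

def feedback_loop(goal, current, gain=0.5, steps=10):
    history = []
    for _ in range(steps):
        error = goal - current      # feedback: how far off are we?
        current += gain * error     # self-correction proportional to the error
        history.append(round(current, 2))
    return history

# e.g. a thermostat nudging a 15°C room toward 21°C
print(feedback_loop(goal=21.0, current=15.0))
```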

Yet cybernetics is flawed because of the NP problem.

The brain has an unknown number of dimensions (n);
a machine-modeled cybernetic feedback loop is fixed in its dimensions;
so all attempts to make the brain computable through rule-based feedback and back-propagation will be futile.

Furthermore, the use of cybernetic thinking in consumer internet products is deterministic and too simplistic. We should do better than if-then, and think about the inner motivations and goals of our constituents, outside the boundaries of the gated (log in / log out) system we’re working on.

Continue reading →


Understanding Vaguely

When we use our intuition independently, or follow a mutual custom (say in our workplace, or in a cultural setting), we understand what we’re doing, but only vaguely.

We know the direction and the boundaries of our action (do this and not that), but we can seldom draw out the exact logic well enough to articulate it.

That same act of vague understanding can be particularly useful when tackling difficult and unknown topics. For example, when learning something new, or trying to unpack a scientific paper.

Understanding, like knowing, is not a binary state.

There are many gradients of abstraction we can use in order to unpack and transform our understanding.

If, for example, we’re reading about a new scientific breakthrough, the details of the method could be out of focus, but the outcome could be understood. If the outcome is not understood, maybe we can map the disciplines this new innovation...

Continue reading →


Machine Impossibility

Machine impossibility is impossible. This supposed paradox reveals a human condition we take for granted.

As humans we routinely change the boundaries of what we think is possible. We climb higher mountains, swim wider canals, and run longer distances. We achieve bigger feats, and inspire our fellow people.

But for a machine, impossibility is absolute: the field of options is fully mapped and fully exploited. This is one of the fallacies of trying to assign intelligence to a machine.

Our own cognitive space has crevices and untapped corners where inspiration can hide. The human spirit prevails, inspires, and pushes us past what we think is the end of our abilities.

Next time we brand a machine as magical, we should think about the last time we did something we thought was impossible.

Continue reading →