Weave neural circuits to create primitive creatures and explore an alien world.
When I landed in New York, I was afraid. I expected endless crowds, giant rats, an arcane subway system, and to inevitably get mugged / hit by a runaway taxi. But it turns out it’s just another city filled with beautiful, busy people. The subway was super efficient and air-conditioned. And jeez, it drove home to me what an issue Portland has with homelessness; I see more street kids walking down Hawthorne than I did during my entire week-long trip there.
I was there for the Games For Change conference, a meeting about using games to do good in the world. This was preceded by a neuroscience virtual reality game jam (a “brain jam”, if you will), where neuroscientists were teamed up with VR developers to make a game in 48 hours.
Long story short, I was paired with a gold-star team, and we knocked it out of the park with this weird floating-lights-with-connections experience called Spark a Memory.
During the jam, one of my team members started talking to somebody at another table. They mentioned something about neuroscience and my ears perked up, but I kept working. Then: “…there’s a neuroscience game I backed on Kickstarter recently. It was called… Crescent Loom?”
What are the chances? It’s the first time I’ve run into a CL backer in the wild. It was awesome and made my day.
Afterwards, everybody voted for their top three games, and Spark a Memory was selected to be presented at the main conference!* At which, of course, I took the opportunity to shamelessly plug Crescent Loom. So that was fun!
At the conference, there was a lot of focus on where games “live” in the world — there’s more to life than Steam and the App Store! These days, games are being used in all sorts of places: classrooms, museums, historical forts, Planned Parenthood, and citizen science projects (mapping the brain, inventing medicine, and discovering protein-folding reactions).
This is exciting from a design perspective. For example, Night Shift by Schell Games is used to help train emergency medicine physicians in trauma triage. It doesn’t have to compete for attention on the App Store because people come to it in order to learn something. Instead of coming up with some kind of fancy UI solution to display patient information, it uses dense text descriptions, just like the medical charts the physicians actually work with.
Making a game for a context other than the entertainment marketplace gives you the freedom to explore designs/topics that wouldn’t “sell”, and lets you rely on information/expectations that your very specific target audience will have.
A question after Night Shift’s talk: “how do you hide the fact that you’re teaching people things in the game?”
Their response was that educational games don’t have to be “sneaky” and hide what they’re teaching. People are playing this game because they want to learn. Besides, sneaky educational games never fooled anybody.
On the coattails of all this good stuff, Crescent Loom was accepted to participate in the Seattle Indies Expo // PAX on September 3rd in Seattle!
They also helpfully included six jurors’ feedback forms with the decision, every single one of which included a plea for a help file or tutorial.
So! That’s what I’ve been working on. Double-click on anything with a help file in the game, and it’ll bring up a screen with some functional information and explanatory animated gifs.
(gifs, I might add, that were non-trivial to get working:)
Another problem I’ve seen is that new players don’t know what creatures are sorta supposed to look like. Therefore, I’ve also started making an introduction where you hatch from an egg and steer a pre-made creature to an obelisk:
I also learned a lot at Games for Change about the process of getting funding for educational games. To get back on my every-other-week schedule, I’m going to write a post about that next week. It’s cool stuff, and drove home for me how different sources of funding will fundamentally shift the direction of a game’s design.
Other than that, I’m going to keep getting things ready for this SIX exhibition. Jeez, it’s been forever since I actually worked on the physics or art. It’s all been writing help files or managing the project as a whole.
“I’ve added most of the systems,” I said to myself. “There’s not that much left to add to the basic engine, right? Make some objectives, polish the tutorial, iron out the physics bugs. It’s been about four months since the kickstarter, and I still have about four months left. This thing is in the bag!”
Oh, Wick of two weeks ago, you sweet summer child.
Based on feedback I got from the July 4th beta and juror feedback, the more-or-less unanimous response has been: “Great concept! I have no idea what I’m supposed to be doing, or how to do it!”
Also, looking at my big-picture Trello board, I still have a lot of promises to fulfill.
So I rolled up my sleeves, ignored all of that, and added a particle engine + basic lighting effects!
Gotta do something for fun.
Oh! Also, I’m going to the Games for Change festival in NYC at the end of July! There’s this meetup before the festival where the organizers were trying to get neuroscientists to meet game designers to make educational games — my exact cup of tea basically made-to-order.
WHAT’S MORE, they’re gonna be giving me free admission to the festival and I’m going to be able to crash with friends in the city, so the whole thing isn’t going to break my bank account. So that’s nice.
I’ve been struggling with one basic thing in the brain: how to move cells versus drag connections. The simplest answer was to add in some kind of move / connect toolbar, but I wanted to see if I could avoid adding more menu buttons.
My first attempt: double-click to pick something up, click+drag to connect.
It sounds simple, and if you know what you’re doing, it’s very efficient. However, in testing, it was one of the most consistent stumbling blocks for people. They’d pick up a cell when they meant to connect it, and accidentally make a whole bunch of connections when they wanted to move it.
I tried a few more variations on this (e.g. left-click-and-drag to connect, right-click-and-drag to move) but people still had a hard time remembering how to do what.
So I gave in and added a toolbar.
The lesson I’m taking away from this is that, as far as accessible design is concerned, invisible functionality is not functionality. It doesn’t matter how efficient or elegant something is in the abstract. Your audience will be frustrated if they, with all their cultural baggage and expectations, can’t figure out how things work.
Since one of Crescent Loom’s design pillars is accessibility, I’m sacrificing something slightly more efficient in favor of something people can actually use.
Another not-tutorial thing I spent a day or two working on was adding some basic tweening to the camera. Tweening is a term borrowed from animation that refers to the “in-between” frames when something moves from one position to another.
Here’s what it looked like to open up the brain in the old version:
And here’s what that looks like tweened:
Tweening is one of those magical secret-sauce-game-design-techniques that isn’t very hard to do, but makes things feel super A+ polished. If you’re into this sort of thing, there’s a foundational talk that spills the rest of the game designers’ secrets here.
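If you want to roll it yourself, the heart of tweening is just an easing function layered on top of linear interpolation. Here’s a minimal sketch in Python (hypothetical names; Crescent Loom itself is written in Monkey X):

```python
# Minimal camera-tweening sketch: interpolate from a start to an end
# position over a fixed duration using an ease-in-out curve.
# (Illustrative only -- not the game's actual code.)

def ease_in_out(t):
    """Smoothstep easing: slow start, fast middle, slow stop (t in [0, 1])."""
    return t * t * (3 - 2 * t)

def tween(start, end, t):
    """Position at normalized time t, eased between start and end."""
    e = ease_in_out(max(0.0, min(1.0, t)))
    return start + (end - start) * e

# A camera opening the brain view might step through the tween each frame,
# e.g. over 30 frames:
frames = [tween(0.0, 100.0, f / 30.0) for f in range(31)]
```

Swap in a different easing function (ease-out, bounce, etc.) and the same two-liner gives the motion a completely different character.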
Flashiest change this week: creatures laying and hatching from eggs! I added in a super-basic calorie-counting system that causes creatures to lay an egg if they get enough to eat:
Took a day or two to iron out the kinks. For example, I set eggs as a very calorie-rich meal (makes sense). So calorie-rich, in fact, that a creature got more calories by eating an egg than it took to lay one. In my infinite wisdom, I also neglected to prevent creatures from eating their own eggs.
I think you can see where this is going:
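For the curious, the broken economy boils down to a sign error in the energy budget. A toy sketch with made-up numbers (not the game’s actual values):

```python
# If eating an egg yields more calories than laying one costs, a creature
# can lay an egg and immediately eat it for a net profit -- forever.
# (Hypothetical numbers for illustration.)

LAY_COST = 50       # calories spent laying an egg
EGG_CALORIES = 80   # calories gained by eating an egg (too rich!)

def lay_and_eat_cycle(calories, cycles):
    """Net calories after repeatedly laying an egg and then eating it."""
    for _ in range(cycles):
        calories -= LAY_COST
        calories += EGG_CALORIES
    return calories

# Each cycle is a free 30 calories: perpetual motion via oviposition.
# The fix: make sure EGG_CALORIES < LAY_COST, or forbid creatures from
# eating their own eggs in the first place.
```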
I also had to track down why creatures were hatching with developmental deformities:
It had something to do with me randomizing the angle of the creature when it hatched. Instead of diving into where in my many sins and cosines I had an error, I made it so all creatures just get born at angle = 0. Some fixes are easier than others.
There was also a bug where if an egg hatched while too close to another creature, the baby and that creature would get fused together in a big angry physics mess (I forgot to take a picture of it, sorry!). I fixed this by just making the eggs physically bigger, which ensured nothing was near their centers when they hatched.
But lo, behold life arising from inanimate matter! Babies start small and gradually grow as they eat things:
No big announcements this time around; I’ve mostly been working on polishing the user interface + adding content for the beta at the end of this month.
I finally got around to working on my list of cool body parts. I made a water jet that can be squeezed as an easy mode of propulsion (though it pulls you back a bit as it refills) and fins that can be activated to tense and turn (like holding an oar out sideways in a canoe):
In order to have a dynamic ecosystem, I need to be able to measure the performance of player creatures. Setting up a database + metrics to track creature stats was one of those things that takes a lot of backend work and has unimpressive visual results, but here ya go:
These are the average distances traveled, speed, parts that were chewed off, and calories gained from eating other animals/plants for three test creatures. Since edible plants didn’t exist yet, I had to add those too:
Those blue guys are plankton, which can act as a food source for herbivores. I still need to make a mouth that can efficiently scoop them up — I’m envisioning a whale-type creature that just swims along to gather them up. You can also see here my new ability to dynamically resize stuff, which is gonna allow creatures to eventually hatch from eggs as lil bebbez and slowly grow larger.
I also added a bit more variation to the maps; there’s now foreground and background rocks that can make little caves that creatures can hide in:
I bit off a lot with this Kickstarter. I tried to limit things by breaking off the main “game” into the Explore Mode stretch goal (which we did not, unfortunately, hit) and have the core goal be just a functional creature+brain editor. But it’s a tough sell when I don’t have an easy answer for “but what do you do?”.
Even if I don’t have the design for what happens 100% nailed down, there’s gonna need to be a world for things to happen in. Up til now, I’ve been making the maps by drawing them by hand in Inkscape, which is a fairly labor-intensive process.
Enter procedural generation, a method of making maps (along with other things) by just setting up general rules and having the computer randomly put it together by following those rules. The biggest advantage to procedural generation is that once you set it up, you can make essentially unlimited unique content. The biggest disadvantage is that the vast majority of it is going to be less interesting than if you did it by hand.
But Crescent Loom’s focus is on the creatures, not the world. I just need somewhere to put them, and procedural generation excels at filling that specific need.
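As a toy illustration of the idea (made-up rules in Python, not the actual Monkey X implementation), here’s the simplest possible Voronoi-style region map: scatter random seed points, then assign each tile to its nearest seed. Each resulting region can then be stamped as rock, open water, cave, etc.:

```python
# Toy Voronoi-style map generation: every tile belongs to the region of
# whichever seed point is closest. (Illustrative sketch only.)
import random

def voronoi_map(width, height, n_seeds, rng=None):
    rng = rng or random.Random(0)  # fixed seed -> reproducible map
    seeds = [(rng.uniform(0, width), rng.uniform(0, height))
             for _ in range(n_seeds)]

    def nearest(x, y):
        # Index of the seed closest to tile center (x, y).
        return min(range(n_seeds),
                   key=lambda i: (seeds[i][0] - x) ** 2 + (seeds[i][1] - y) ** 2)

    # Each tile stores the index of its region.
    return [[nearest(x + 0.5, y + 0.5) for x in range(width)]
            for y in range(height)]

tiles = voronoi_map(20, 10, 5)
```

Change the seed and you get a brand-new map; change the rules for what each region becomes and you get a brand-new biome.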
Long story short, importing Voronoi tessellations into Monkey X and setting up procedural generation required a lot of tedious math. Rather than subject you to that, here’s a whole buncha screenshots + commentary that I’ll share with you like they’re vacation photos:
Whoop, day late. I’m trying to stick to my every-other Monday schedule for these, but it slipped a bit this time.
Continued tweaking cell properties. It’s now possible to make chains of pacemakers inhibiting each other, which is a type of circuit you see all the time in nature:
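As a rough illustration of the principle (a toy model, not the in-game neuron simulation), here’s a ring of pacemaker cells where each one inhibits the next, so activity cycles around the chain:

```python
# Toy chain of pacemaker cells with inhibition (illustrative model only).
# Each cell's voltage drifts upward on its own; when a cell fires, it
# resets and suppresses the next cell in the ring, so firing alternates.

def step(voltages, threshold=1.0, drift=0.1, inhibition=0.5):
    fired = [v >= threshold for v in voltages]
    n = len(voltages)
    new = []
    for i, v in enumerate(voltages):
        v = 0.0 if fired[i] else v + drift   # fire-and-reset, or drift up
        if fired[(i - 1) % n]:               # inhibited by the previous cell
            v = max(0.0, v - inhibition)
        new.append(v)
    return new

# Stagger the starting voltages so the cells don't all fire in lockstep:
state = [0.9, 0.5, 0.1]
for _ in range(20):
    state = step(state)
```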
Time is strange. It feels like the changes I make every day are small, but gradually they accumulate into something bigger than it seems like I could make. And then I realize how little time I have left, and the good feelings about that turn back to anxious work.
Last weekend, I showcased Crescent Loom at the Portland Mercury’s new tech/gaming convention: Betacon. It was a cool event for me for a few reasons:
It got me to focus on getting a polished build put together, fixing a lot of the UI things that I’d been putting off (e.g. finishing the design, seamlessly switching between body and brain editing, making an info panel for the neurons)
But from the start, about half the people picking up the game for the first time would put their muscles down like this:
Which makes sense! It’s a lot more intuitive that a set of muscles would run up along an arm rather than, say, between the creature’s ankles. A better design for the game would be one where people’s natural intuition is the correct thing to do.
It just so happened that I’d also been wrestling with a different design problem: there wasn’t a good way to get a limb to turn in a specific direction on muscle activation; the muscles would tend to get bound up since they didn’t wrap along with the limb:
SO! I decided to try and kill two birds with one stone and make muscle attachment points run up alongside each limb, rather than having a free-form “attach anything to anywhere” system. This has created a standardized way to place muscles that produces a predictable motion, and is far less likely to get bound up:
I’m pretty pleased with this solution, not gonna lie. Identifying problems and finding clever ways to solve them is one of the most fun processes in game dev. Of course, this raises its own problems (can you attach any hardpoint to any hardpoint?)…
Crescent Loom won the Betacon award for “Most Innovative”! I’ve never won an award for my games before, so that was PRETTY SLICK.
I participated in the March for Science, and had a brief chat with a scientist/photographer named Tyler Hulett who put together a snapshot-documentary on the march.
You can see it on Vimeo; my beautiful face is at the 5-minute mark.
Now, here’s the current to-do list on my desktop:
You’ll note the lack of, y’know, actual development on this list. I am probably one of the slowest writers I know and there is SO MUCH writing you need to do in order to manage a game project and fund thyself.
I think I need to find a writer/PR/publisher. I’m spread thin and that’s the area where I’m least-efficient. Grants especially require a specialized skillset that I simply don’t have. Problem is, even with the KS money, the project is already bare-bones budget-wise. So, dunno how that’ll turn out. Maybe I’ll just learn to write faster and care less about typeos.
Oof, it’s already been a month since the Kickstarter ended? Time is flying by; it feels like I barely get anything done each individual day, but looking back I’m always surprised by how much I did.
Quick note: I’ll be showing Crescent Loom at Betacon in Portland this weekend! (Apr 15-16) If you wanna see how the game’s coming along, that’ll be the place to say hi!
The biggest visual difference is that I finally added a user interface! You can now click buttons instead of having to tab through a thousand different options:
I looked at a handful of other construction games as reference points. I think it’s pretty obvious which one of these I stole the most from:
Kerbal Space Program suffers from presenting too much information (its pop-ups are a mess) and Spore suffers from being simple to the point of uninteresting (though it is cute & approachable, which is important).
I tried to strike the same balance as Besiege, and limited icons to horizontal bars on the top and bottom of the screen. If more needs to be shown, I can do that in a pop-up when people mouseover or click the icon.
The most interesting design decision was how to incorporate the brain window. I didn’t want a separate half of the screen anymore, so I figured I’d go for a picture-in-picture approach.
Two options presented themselves to me:
(In either case, clicking on the brain would embiggen it to take up half the screen.)
I asked Twitter, and people were pretty enthusiastic about putting it on the creature itself, so that’s what I did. I might set the other mode as an option, because I think there are cases when you’d want to clearly see what’s happening without it taking up most of the screen. Something for Future Wick to do.
Turned out that simulating the ion channels opening and closing along each neuronal tile in a scripting language (which is more flexible than the main “engine” code, but slower) was too computationally intensive, so I simplified the ion channel scripts to only run once during setup. Here’s the ion channel that responds to a keyboard hit. Any cell with this becomes thirty times more permeable to sodium while the Q key is pressed:
Finally, there was a bunch of backend stuff I finished up as a consequence of the new way I’m simulating neurons. The easiest way of saving all the neurons + ion channels was actually to save these ion channel scripts in the save file and run them when loading the creature.
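To sketch the setup-time idea (using hypothetical Python stand-ins, not the game’s actual scripting language): instead of re-running a channel script every simulation tick, the script runs once at setup and registers a simple rule that the fast engine code can evaluate cheaply each frame:

```python
# Illustrative sketch of a setup-time ion channel script. All names here
# are hypothetical -- the real game uses its own scripting language.

class Cell:
    def __init__(self):
        self.na_permeability = 1.0
        self.modifiers = []   # (key, multiplier) rules registered at setup

    def effective_na_permeability(self, keys_down):
        """Cheap per-frame evaluation: apply each registered rule."""
        p = self.na_permeability
        for key, multiplier in self.modifiers:
            if key in keys_down:
                p *= multiplier
        return p

def keyboard_channel_setup(cell, key="q", multiplier=30.0):
    """Runs ONCE at setup: register a 30x sodium-permeability boost
    while the key is held, rather than scripting it per-tick."""
    cell.modifiers.append((key, multiplier))

cell = Cell()
keyboard_channel_setup(cell)
```

The slow, flexible script only pays its cost once; the per-frame work is a trivial table lookup.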
Ugh, why do I reinvent the wheel every time I make a new user interface? TBH, most of my week was ironing out the logic of how menus and icons arrange themselves. NOTE TO FUTURE WICK: text wrapping has been solved a thousand times. If you find yourself trying to do it again, don’t. Use a library that doesn’t have all the bugs your sleep-deprived brain decided would be fun to add.
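Case in point: in many languages the solved version is one import away. In Python, for instance, the standard library’s textwrap module does it:

```python
# Text wrapping via the standard library instead of a hand-rolled loop.
import textwrap

label = "Any cell with this channel becomes thirty times more permeable to sodium."
lines = textwrap.wrap(label, width=30)  # list of lines, each <= 30 chars
```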
Decided that a bi-weekly update schedule’d be better to avoid spamming the email lists, which is why I skipped sending out an update last week (but I did publish a postmortem on the campaign – it took quite an unusual path!):
So what’s been happening in the last two weeks? Looking back, a surprising amount given how slow things feel on any particular day.
First off, I’ve been redesigning the creatures to make them feel more organic and less robot-like. Here’s the concept art:
And once I tweaked the drag physics (edges that are facing other parts aren’t included in the drag calculation, and thanks to Benjamin Morrison, springs are now damped) and added non-rectangular pieces, I was pleased to discover that it was a *lot* easier to make things swim & steer!
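For the curious, spring damping is a small change with a big payoff: add a force opposing velocity so oscillations bleed off energy instead of ringing forever. A sketch with made-up constants (not the game’s tuned values):

```python
# Damped spring sketch: F = -k*x - damping*v, integrated with
# semi-implicit Euler (mass = 1). Without the damping term the spring
# oscillates forever; with it, displacement decays toward rest.

def spring_step(x, v, k=10.0, damping=2.0, dt=0.01):
    """Advance displacement x and velocity v by one timestep."""
    a = -k * x - damping * v   # spring force plus velocity-opposing drag
    v += a * dt
    x += v * dt
    return x, v

x, v = 1.0, 0.0          # start stretched, at rest
for _ in range(2000):    # simulate 20 seconds
    x, v = spring_step(x, v)
# By now the spring has settled essentially back to rest.
```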
So pretty. I spent way too long just cruising around as this lil guy.
I’ve expanded the scripting language implementation from last update to maps + neurons, though it’s not quite done yet. It’s a lot more work than it sounds, since I have to code each function in triplicate. Wish there was an easier way. :/
Speaking of neurons, I spent last week visiting Gabriel Barello (a computational neuroscientist math friend) in Eugene and he hammered out the solution (several, actually) for simulating actual electrical currents flowing through a neuron in real time:
This is just fantastic. It’s so so much more accurate than the janky version I had before, is prettier, and allows for a lot more nuance.
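The details of the math are beyond this post, but to give a flavor of simulating membrane voltage in real time, here’s a textbook leaky integrate-and-fire neuron (a standard simplified model, not the solution Gabriel worked out for the game):

```python
# Leaky integrate-and-fire neuron: input current charges the membrane,
# the membrane leaks back toward rest, and the cell spikes (and resets)
# when voltage crosses threshold. (Textbook model, for illustration.)

def simulate(input_current, dt=1.0, tau=10.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-70.0):
    """Return spike times (in steps) for a list of input currents."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + i_in) / tau   # leak toward rest + drive
        v += dv * dt
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset                     # fire and reset
    return spikes

# A steady suprathreshold current makes the cell spike rhythmically:
spikes = simulate([20.0] * 200)
```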
Finally, I’m almost done with my application to Stugan which, if accepted, will let me work on CL in Sweden for the summer. This would be a rad opportunity — not only is it a great focused work environment, but meeting + working alongside other devs is important if I want to keep growing this weird game/science independent career of mine.
Hi! Quick intro: My name is Wick, I’m a neuroscientist / solo indie game dev, and I just ran my second successful Kickstarter campaign. The game is called Crescent Loom; players build creatures, weave brains, and explore an underwater ecosystem.
Common wisdom says that most campaigns see a big spike at the start, a long plateau in the middle, and another spike at the end.
Crescent Loom… did not follow that pattern.
What happened? Lemme back up and give some context to what things were like right before that huge jump.