State of the Gadget: Kitchen

This probably won’t come as much of a surprise, but I’m a fan of gadgets. Whether it’s a consumer gadget you can buy from Best Buy, or some new piece of technology installed in a business, I’m always drawn to new toys to play with.

Over the last year, there have been several gadgets that have caught my eye, either because I’ve read about them or because I’ve used them personally. I wanted to take some time to write about my impressions, mostly because there are several themes that seem consistent across consumer electronics recently — themes that, unfortunately, don’t always help the consumer or improve the product.

In this post, I wanted to take a look at the latest kitchen gadgets from CES. One gadget in particular caught my eye, but unfortunately, not in a good way.


When thinking about successful food technology, two things come to mind: meal box delivery and restaurant delivery. Attempts to put technology directly in the kitchen have mostly failed, but it’s not for a lack of trying. Every year, companies show off their latest innovations at CES, hoping to reverse the trend. I’d like to think that will happen at some point — but it won’t be this year since, once again, there were no standout products. In fact, I’d go so far as to say that the people who are designing these devices have either never used their kitchen, or were forced into creating them despite their better judgement.

For example, GE unveiled their latest innovation: the Kitchen Hub. It’s essentially a vent hood that also has a large touch screen mounted to the front of it. My first concern with such a device is what you do about a microwave. Many kitchen layouts these days will put the microwave in a similar location — built into a vent hood over the oven — as a way to open up counter space. But if you use the Kitchen Hub, you’d have to forgo that option (and of course none of the PR photos on the website show an alternate spot for a microwave).

Expecting a cook to give up counter space is a big ask — but it isn’t out of the question as long as the device improves life in the kitchen. Trouble is, the Kitchen Hub appears to focus more on the technology inside it than on streamlining everyday tasks. For example, in addition to the large display, there are several cameras — one front-facing, and at least one mounted below so that you can show off what you’re cooking. The selling point here is that you can make video calls, presumably with family and friends who could provide cooking advice or join in on the experience. Alternatively, if you’re part of the streaming video craze, you could use this setup to improve your streams.

It sounds fancy, but I remain unconvinced that video calling while cooking justifies such a large, invasive setup in the kitchen. Even if you do make a lot of video calls, though, it doesn’t seem ideal. A lot of the cooking process doesn’t happen in front of the stove, so most of the time the person you’re chatting with either wouldn’t see you, or would only see part of you. These same concerns would also apply to streaming — which I imagine would be even more undesirable.

As if trying to justify such a large display, there is Netflix integration so that you can watch movies while cooking. Personally I watch movies when I’m sitting down and relaxing, not when I’m trying to prepare ingredients and follow a recipe — but maybe that’s just me.

Speaking of following a recipe, the Kitchen Hub has a recipe app with “over 5000 recipes”. You can even take a picture of all those hand-written recipes from grandma and add them to its database — but if you were hoping that it would transcribe the recipe to text automatically, you’re out of luck. All you can do is view the photo, so you might as well just keep using the paper copy.

The recipe app seems like it would work in a pinch, but the whole process is designed around a workflow that doesn’t make sense for the typical cook. For example, I typically don’t plan meals while I’m standing in the kitchen. But even if I did, and found a recipe to make, I’d likely need to see what ingredients I have and whether I need to do shopping. Based on the promotional video, Kitchen Hub’s solution to this is an “email ingredients” button — a surprisingly low-tech solution for such a high-end device. Once you finally start cooking, it lets you remotely set the oven temperature from the recipe (as long as you have a compatible GE oven), which I suppose is nice. But it’s not clear if it also has the ability to set or keep track of timers directly from the recipe.

When I saw a device called “Kitchen Hub,” I assumed it would excel at kitchen-related tasks. In some ways, for certain segments of the market, that is perhaps true. But for the majority of cooks, the Kitchen Hub works against the user instead of with them.

It’s hard to know what decisions were made to create the final device, but one aspect is consistent with something I see all the time: the problems being solved don’t get as much attention as the underlying technology and trends used to build the product. Admittedly, it’s an easy trap to fall into. Software and hardware are constantly evolving, and there is an urge to keep up with all the shiny new features that come along — after all, you don’t want your expertise to grow stale or obsolete. Similarly, you want your product to look modern and innovative, so it’s tempting to use all the latest features available. In the end, these two desires collide and you end up with something like the Kitchen Hub.

The needs of users should always come first. Even if you claim to be an expert, research has to be done in order to verify assumptions and determine priorities. Only then can technology come into the picture. This process takes time, and isn’t always easy — users can be fickle and contradictory — but it’s noticeable if an attempt isn’t made at all.

State of the Gadget: Health

This probably won’t come as much of a surprise, but I’m a fan of gadgets. Whether it’s a consumer gadget you can buy from Best Buy, or some new piece of technology installed in a business, I’m always drawn to new toys to play with.

Over the last year, there have been several gadgets that have caught my eye, either because I’ve read about them or because I’ve used them personally. I wanted to take some time to write about my impressions, mostly because there are several themes that seem consistent across consumer electronics recently — themes that, unfortunately, don’t always help the consumer or improve the product.

For this first post, I want to look at health gadgets — FitBit, Apple Watch and the like. While their core idea is great (and has certainly caught on with consumers), their foundation isn’t keeping up with the latest medical research.


The transition from college to day job was tough on me for many reasons, but perhaps the biggest hurdle to get over was sitting in the same place for eight hours a day. I’m not a particularly active or sporty individual, but at college I would still walk back and forth between classes several times a day — which was at least 10-15 minutes per trip. Once I left college, I no longer had a schedule that promoted exercise. My office was too far away to walk to, and programming isn’t exactly a physical activity — and so, as a result, I gained weight.

At the time, the idea of a fitness device didn’t really exist. The closest analogue was a pedometer, which allowed you to count how many steps you had taken in a particular day. It’s not a bad metric, but it doesn’t take into account the kind of steps you’re taking — is it a leisurely stroll through the park, or an intense 100 meter sprint? Pedometers encouraged fitness, but they didn’t help much with weight loss.

The companies building these devices realized this too, and before long, they introduced more sophisticated technology. Heart rate monitoring and GPS features are now common, and help answer the questions of how far and how intense.

While these improvements help, they don’t immediately translate into losing weight or becoming more fit. For example, speaking personally, the devices were happy to tell me how many calories I had burned based on the activity they measured, but that didn’t seem to translate to the scale when I weighed myself.

As research on the human body continues, it’s becoming increasingly clear that fitness is complex and multi-faceted. Weight loss isn’t a simple calories-in/calories-out equation, and, even if it was, today’s devices don’t do a good job of counting them. In fact, they’re really only good at tracking one thing: cardio fitness. Due to steadily advancing heart rate monitoring technology, they can not only tell you your heart rate, but also provide an estimate of VO2 Max and sinus rhythm.

But when it comes to weight and nutrition, these same devices fail miserably. There are several reasons for this; but, at a high level, it all circles back to the concept of calories. Determining the calorie content of food is challenging at best — even if you get a reasonable estimate, the number of calories your body actually uses differs from person to person and is not easily calculated. Then there is the number of calories you burn overall — which also differs from person to person, and is impossible to calculate based on steps and heart rate alone (not that this stops most fitness devices from attempting to do so). Finally, there is the spot where these two metrics collide: how many of the calories you consume are burned by the body to keep you alive and sustain you through exercise? It probably wouldn’t surprise you at this point to learn that there is no easy way to calculate this either. It all depends on a person’s individual metabolism and efficiency at processing food, something that can’t be determined by a device strapped to your wrist.

While I’m sure there are technology companies attempting to solve this fitness black hole, there have been no viable innovations yet. But more importantly — and more damning — is that today’s fitness devices continue the illusion that they can accurately count the number of calories you burn and, if you log the food you eat, accurately count the number of calories you consume. This results in misleading data and frustrated users who appear to be consuming fewer calories than they burn, but still gain weight.

There is considerable inertia when a company dominates a segment of the industry and everyone else tries to catch up. FitBit is the unquestioned leader in the space, and so they have the luxury of shaping what a fitness device should be. People also gravitate toward the simple numbers presented by their devices: step count, calories burned, etc. As a result, competitors are hesitant to stray from what is familiar — they either add questionable novelties that only compound the problem of using steps and calories, or they innovate on the external look of the device.

None of this is good enough. It all amounts to excuses — reasons to not stray from the status quo because it could fail. To be sure, these aren’t trivial issues to overcome; but misleading users about the status of their health is not something to be taken lightly. Technology is a large part of our lives. Now more than ever, we need to question the information we receive from our devices to ensure it makes our lives better instead of reinforcing a false sense of security.

Interactive Narrative, Part 2

Pushing the Boundaries

In the first part of this series, I wanted to lay the groundwork for a discussion of narrative in video games. Before we move on to a more technical discussion, I thought it would be helpful to look at some recent games that have pushed the envelope of interactive narratives.

Ready?

Night in the Woods

It seems like the first thing that hooks a player to this game is the unique art style and music, which are both exemplary. But the element that keeps you playing is the writing — clever, witty and believable, the writing draws characters that are as unique as the art style, and puts them in a story that is both down to Earth and unexpected.

Night in the Woods starts with the main character returning to her hometown. As you explore, she strikes up conversations with family, friends, old acquaintances and even some strangers. Most of the outcomes from those interactions are minor, but there is one overarching thread: who is your best friend? The game tracks this by asking several times who you want to hang out with — and the character you spend the most time with affects how the conclusion unfolds.

Another interesting point to note about this mechanic is the careful balance between hiding the ultimate outcome of this decision and still making it clear that the choice carries weight. For example, as soon as you start the day, you are primed with the choice you will have to make later:

Eventually, you are prompted for your decision. Each friend outlines what will happen if you choose to spend time with them — and what you’ll be missing out on if you don’t choose them:

While this framing makes it clear that you’re making a significant decision, there are no immediate ramifications — except, of course, for allowing you to spend more time with that friend. In fact, later that evening, you return to your laptop to see that the friend you didn’t hang out with holds no ill-will against you, and even lets you know what they were up to. It’s not until the end of the game that you see how your choices impact the story.

80 Days

80 Days is arguably the best example of a narrative game available today. It is made by Inkle, who not only has another engrossing narrative game called Sorcery!, but has also released their story-building toolkit as open source. They’re serious about the genre, and it shows.

I single out 80 Days for several reasons: it’s replayable, it’s engrossing, and every choice has meaning.

Replayable because the path you take around the world is entirely up to you. That’s the “game” portion of this narrative game — you are given a globe dotted with cities, and you have to find your way from one city to the next, with the ultimate goal of making it completely around the world. Your first trip may take you through Europe and India, whereas your second trip may instead go north through Scandinavia and Russia. While some journeys may repeat destinations, the majority of each playthrough will be unique — and even when it does repeat, there are enough choices embedded along the way that you can still have a completely new experience.

Engrossing because the quality of writing is stellar. You take on the role of Passepartout, the manservant to one Phileas Fogg of London, who recently placed a wager that he could circumnavigate the globe in 80 days. It is your job to not only keep your master happy and comfortable, but also find the fastest route around the world without going bankrupt. Every city comes alive with its own atmosphere and culture. The people you talk to add realism and context as they express their wonder or fear. Halfway through a journey, I will often find myself simultaneously rushing to meet the 80 day deadline while also wanting the game to continue so that I can learn more about the city, country and world that the story presents.

Every choice has meaning because the prompts you are given throughout the narrative have immediate feedback — whether it’s as minor as losing a few pounds to a pickpocket or as major as being thrown in jail. The game also remembers key characters that you’ve met, or important knowledge that you’ve gained, so that you can leverage certain advantages later in the story. While some parts of the story are repetitive, the game uses some smart randomization and insight into previous playthroughs to make each game slightly more interesting than the last.

All that said, the game isn’t perfect. There is an inventory mechanic in the game that ostensibly lets you take advantage of global supply and demand to turn a profit and keep your journey going without dipping into savings. However, its implementation is clunky and shoved off to the side in a way that makes it difficult to fully leverage. As a new player, it can be especially difficult to keep track of which items can be sold in what locations for a tidy profit — or, more to the point, how to get to those locations.

You will also occasionally be told that your relationship with your master is getting stronger or weaker. It’s unclear what this means or how it affects the game — and perhaps that is intentional. But if you make the effort to tell the player this information, the player expects to see some impact from it in the game.

But overall, these are minor quibbles — this is an experience that sits at the pinnacle of narrative-based games.

Life is Strange

While most “choose your own adventure” stories lock you into whatever choice you make, Life is Strange asks, “what if you could change your mind?”

Life is Strange follows the character of Max as she starts at a new school. Over the course of several chapters, she meets new characters and refamiliarizes herself with characters from her past. But in the course of doing so, she also discovers a strange power: the ability to rewind time. For every choice that she makes, she has the ability to go back and make a different choice.

Thinking about the story in a game such as this, the ability to rewind time forces the writer to make sure every choice has a noticeable and tangible impact. Life is Strange delivers on that requirement, although it does take an interesting approach on several key parts of the narrative. Certain choices are given a high importance — i.e., they branch the plot in some significant way — but the immediate impact of those choices has no obvious “good” or “bad” outcome. Instead, they have a mixture of both, forcing the player to decide which blend of “good” and “bad” adds up to an outcome they are happy with. And, while all of the choices in the game have a point where you can no longer rewind, these key plot beats have a hard stop that is clearly called out to the player, making it impossible to know the long-term effect of a choice before you commit to it.

Ultimately, the game becomes an exploration of what choice means — how it affects our lives, and how it affects those around us. For a developer of a game with an interactive narrative, this mindset is important to consider, even if rewinding time isn’t a gameplay mechanic — because a player can still “rewind time” through save files or starting a new game.

Something else to consider: what does it mean to give certain choices more weight when presented to the player? In our day-to-day lives, we are never given an opportunity to see which choices will affect our lives so prominently — so why should it be presented that way in a game? Does it make sense, especially for a game that has a rollback mechanic? Life is Strange cuts to the heart of many interactive narratives, and is important to play for that reason alone.

The Elder Scrolls: Skyrim

Skyrim is actually here as a counter-example. While I love this game from an exploration standpoint, it is deeply lacking in narrative. There are certainly opportunities for interaction and even choice — but most of them are shallow, causing no long-term effects on you, other characters or the world. Worst of all, various incidental characters will repeat dialogue, often with the same voice actor.

This disconnect between potential and reality is unfortunate — but it is also inspiring. Here we have a game that is dripping with history and character, just waiting to be tapped. What if the world of Skyrim had an intricate narrative that matched its grand scope?

Up Next

With this context in mind, the next part will explore some ideas for adding another dimension to interactive narratives, and how they might be implemented.

Interactive Narrative, Part 1


Introduction

In this series of posts, I want to take a slight deviation from technology experimentation to delve into a topic I’ve been thinking about a lot recently.

I don’t play video games too often; but, when I do, I usually gravitate toward games that have a strong narrative with interesting characters. (Incidentally, this is also the type of movie I like, which I’m sure isn’t a coincidence.) The problem is that there aren’t many games that fall into this camp. Traditionally, game play and mechanics come first, leading to an insipid story — if there even is one. In some cases, this approach is perfectly fine. That’s especially true for mobile games where you want to allow a player to jump in and out of a game session quickly. Mobile games also can’t assume they have a player’s undivided attention.

In games where the developer does make an attempt at a story, there are a few questions that need to be answered. What is the balance between story and gameplay? Is the story just a frame that justifies the gameplay, or is there a message being conveyed? What effect does the story have on the gameplay? Do you allow a player to interact with the environment while story or character development is taking place, or is story relegated to “cut scenes” only?

But to me, the most important question is: does the player get to interact with the story?

Video games allow for story interaction in a way that has been impossible to explore in the past. Movies and books are predominantly a passive experience, and while there has been some experimentation with interactivity in these formats, those experiments usually prove too expensive or unwieldy to pursue in an interesting way. Perhaps the best example of this is the “Choose Your Own Adventure” series of books. By telling the reader to jump to certain parts of the book, the author allows the reader to decide where the story goes next. But a book can only contain a certain number of pages, and thus a certain number of ideas — ultimately, the end result is a series of choices that all funnel to the same ending. The illusion of choice quickly evaporates.

Drawing from these early attempts for inspiration, video games have followed a similar framework when adding interactivity. The result, therefore, is similar: your choices seem to have an immediate effect, but subsequent playthroughs reveal that few of those choices actually yield a different outcome.

Thankfully, developers who care about interactive storytelling have continued to explore the advantages that technology brings to this space. For example, choices can be much less intrusive: not only is there no book to root through for a specific page, but the player doesn’t even have to realize that a choice has been made. Even more interestingly, video games have memory. While a book can make some assumptions about how the reader got to that spot in the narrative, those assumptions usually only have a scope of one or two choices back. A video game, in comparison, can theoretically remember anything. There is a stack of choices that the author can mine for data, allowing them to subtly change the details of wherever the player is in the story.

In the next part, I want to explore some examples of games that have pushed these narrative boundaries. After that, I’d like to talk about how this concept could be taken to the next step — and what that might look like to an author or developer.

Docker and Bitbucket Pipelines

There are two motivations behind this post: the first is that I wanted to learn more about Docker. The second is that Bitbucket recently released a feature called Pipelines, which allows you to add continuous integration to your projects. I already had a test suite backing ProjectNom, so the idea of automating that into a CI tool was an attractive goal.

So let’s start at the beginning: Docker.

I’d already heard enough about Docker to understand the general concept. If you’re new to Docker, they have a good introduction on their website. While my end goal was to use Docker as a container for running ProjectNom tests, I also wanted to see if I could use it as a fully-functional, isolated development environment.

ProjectNom runs on a standard LAMP stack, so the first thing I did was look for an image that could get me as close to that environment as possible. Luckily, such a thing existed in the form of linode/lamp.

With the core technology in place, I just needed to hook up the site to Apache and the database to MySQL.
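
For anyone curious what that looked like in practice, the basic setup is roughly the following. This is a command-line sketch only: the volume path and container name are placeholders, and it assumes the stock linode/lamp image drops you into a shell where you start the services yourself.

# Grab the image and start a container, publishing Apache's port
# (the volume mount and names below are illustrative placeholders)
docker pull linode/lamp
docker run -it -p 80:80 -v /path/to/projectnom:/var/www/projectnom --name pn-dev linode/lamp /bin/bash

# Inside the container, start the services manually
service apache2 start
service mysql start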

ReactJS: State, Props and Reusable Components

So here’s a thing.

As I build more components in ReactJS, I’m starting to uncover some interesting use cases. One such case that came up recently involved reusing components — a component that can be used both on its own and as part of another component. Let me explain.

A new feature on ProjectNom involves connecting your user profile to a Twitter account. To make this a more seamless experience, I decided to build a simple React component that surfaces the current state of your Twitter account.

For example, if you haven’t added your Twitter account yet:

Connect Twitter Account Example

If you want to add your Twitter account, then we have to send you to the Twitter website to authenticate, so we display the following:

Redirect to Twitter Example

And finally, if you’ve already connected your Twitter account:

Twitter Connected Example

As you can see, this component revolves heavily around manipulating a button, so I knew that would be the primary element in the render.

But thinking more generally, the second state with the busy indicator was something that jumped out as a component that I could use elsewhere on the site. So, I decided to build a “BusyButton” component which could be used both as part of this Twitter component and on its own whenever I needed a button to display a “busy” state.

So first, let’s make a BusyButton suitable for our Twitter component. As described in my earlier post, it’s best practice to store state in the outermost parent component. Child components just use their props. So, using that philosophy, we can build our BusyButton component like so:

var React = require("react");

// BusyButton, props-only version: shows a spinner and disables itself when busy,
// otherwise renders a normal button with an optional icon and label.
module.exports = React.createClass({
	render: function() {
		var buttonStyle = "btn-" + this.props.style;
				
		if (this.props.busy)
		{
			var busyLabel = this.props.busyLabel
				? React.DOM.span({style: {paddingLeft: 10}}, this.props.busyLabel)
				: "";

			return React.DOM.button({className: "btn " + buttonStyle, style: {minWidth: 75}, disabled: "disabled"},
				React.DOM.i({className: "fa fa-circle-o-notch fa-spin"}),
				busyLabel
			);
		}
		else
		{
			var icon = this.props.icon
				? React.DOM.i({className: "fa fa-" + this.props.icon, style: {paddingRight: 10}})
				: "";

			var attributes = {className: "btn " + buttonStyle, type: this.props.type, id: this.props.id, onClick: this.props.onClick};
			
			if (this.props.disabled) {
				attributes.disabled = "disabled";
			}

			return React.DOM.button(attributes,
				icon,
				this.props.label
			);
		}
	}
});

As you can see, this component is driven entirely by props. If this is a child component, then that makes sense. If the parent gets rendered again, it will pass new props to the child, and the child’s behavior will change when it renders.
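
To make that flow concrete, here is a rough sketch of a parent (something like the Twitter component) rendering the BusyButton with props derived from its own state. The prop values, require path and the connecting flag are made up for illustration; this isn’t ProjectNom’s actual code.

var React = require("react");
var BusyButton = require("./BusyButton.js");   // illustrative path

module.exports = React.createClass({
	getInitialState: function() {
		return {connecting: false};            // hypothetical parent state
	},

	handleConnectClick: function() {
		this.setState({connecting: true});     // flips the child into its busy rendering on the next render
	},

	render: function() {
		return React.createElement(BusyButton, {
			style: "primary",
			label: "Connect Twitter Account",
			busyLabel: "Redirecting to Twitter...",
			icon: "twitter",
			busy: this.state.connecting,       // the parent's state drives the child's props
			onClick: this.handleConnectClick
		});
	}
});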

So far so good.

But now we come to our second acceptance criterion. We want this button to have the same behavior on its own: a button that can be marked as busy, with the same disabled state and indicator icon.

With our current component, we have to set everything with props. That’s fine for the component’s initial render, but if we want to change the component’s behavior after the fact, we have a problem. Props are supposed to be immutable — an initial state and nothing more. (see: Props in getInitialState Is an Anti-Pattern)

Truth be told, we could ignore this advice and use something like componentWillReceiveProps to make the component behave like we expect. The issue I have with this approach is that it ignores the fact that we are fundamentally talking about changes in the component’s state. The “active” and “busy” behaviors are two different states that the button can be in. And the button can freely move between those states as necessary.
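
For reference, that rejected approach would look roughly like the following inside the component definition, shown only to illustrate the pattern:

// (Rejected approach) Seed state from props, then mirror any new props into state:
getInitialState: function() {
	return {busy: this.props.busy, disabled: this.props.disabled};
},

componentWillReceiveProps: function(nextProps) {
	// every parent re-render overwrites the button's local state
	this.setState({busy: nextProps.busy, disabled: nextProps.disabled});
},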

This was the conundrum: I needed the component to run from props in order to keep state isolated in one spot, but I also needed the component to maintain state so that it could be easily manipulated when used on its own. I never found a satisfactory answer to this problem, so I welcome any suggestions or best practices from others who may have encountered this scenario.

In the meantime, I’ve done something of a compromise:

var React = require("react");

// BusyButton, stateful version: seeded from "initial" props, then controlled
// through the busy/activate/disable/enable functions below.
module.exports = React.createClass({
	getInitialState: function() {
		return {
			busy: this.props.initialBusy,
			disabled: this.props.initialDisabled
		};
	},

	busy: function() {
		this.setState({busy: true});
	},
	
	activate: function() {
		this.setState({busy: false});
	},
	
	disable: function() {
		this.setState({disabled: true});
	},
	
	enable: function() {
		this.setState({disabled: false});
	},

	render: function() {
		var buttonStyle = "btn-" + this.props.style;
				
		if (this.state.busy)
		{
			var busyLabel = this.props.busyLabel
				? React.DOM.span({style: {paddingLeft: 10}}, this.props.busyLabel)
				: "";

			return React.DOM.button({className: "btn " + buttonStyle, style: {minWidth: 75}, disabled: "disabled"},
				React.DOM.i({className: "fa fa-circle-o-notch fa-spin"}),
				busyLabel
			);
		}
		else
		{
			var icon = this.props.icon
				? React.DOM.i({className: "fa fa-" + this.props.icon, style: {paddingRight: 10}})
				: "";

			var attributes = {className: "btn " + buttonStyle, type: this.props.type, id: this.props.id, onClick: this.props.onClick};
			
			if (this.state.disabled) {
				attributes.disabled = "disabled";
			}

			return React.DOM.button(attributes,
				icon,
				this.props.label
			);
		}
	}
});

I ended up using state, since it conceptually made sense for the component. But rather than manipulate state directly, I’ve exposed some custom functions on the component that allow its user (whether it be a parent component or custom JavaScript) to control whether the button is in an active or busy state.
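
Used on its own, the button can then be driven from plain JavaScript. Here is a small sketch; the mount point, the require path and the two-second timeout are purely illustrative:

var React = require("react");
var ReactDOM = require("react-dom");
var BusyButton = require("./BusyButton.js");   // illustrative path

// For a createClass component, ReactDOM.render returns the mounted instance,
// so we can keep a reference and call the exposed functions on it.
var saveButton = ReactDOM.render(
	React.createElement(BusyButton, {label: "Save", style: "primary", initialBusy: false}),
	document.getElementById("saveButton")      // hypothetical mount point
);

// Later, flip between the two states explicitly:
saveButton.busy();                             // spinner + disabled
setTimeout(function() {
	saveButton.activate();                     // back to a normal, clickable button
}, 2000);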

The disadvantage is that you have to use these functions. Passing down props from the parent no longer works, because this component now has state, and state takes precedence over props. Luckily, this is a basic component with only two primary states (active & busy), so it’s easy to manage.

But it’s clear that React doesn’t have good support for this scenario, and I’m not entirely happy with this solution. For a framework that is built entirely around the concept of reusable components, it isn’t very clear how reusability is supposed to work.

DRY, Browserify

In my last post, I discussed some of the biggest lessons I learned while building my first component in ReactJS. One of those lessons revolved around Browserify, and how it was the best way to leverage component reuse.

As I moved closer to production, I realized there was one particularly nasty side effect to Browserify. Take the following code for example:

var React = require("react");
var ReactDOM = require("react-dom");

// Custom React Component
var RecipeTabs = require("components/RecipeTabs.js");

function initPage()
{
	ReactDOM.render(
	  React.createElement(RecipeTabs),
	  document.getElementById('recipeTabs')
	);
}

Browserify lets us use CommonJS require syntax even though browsers don’t natively support it. One way it does this is by inlining the entire contents of your require’d JavaScript into your script. So, in the example above, our 13-line script will suddenly grow to include all of React, all of ReactDOM and all of the code for RecipeTabs.

If we blindly require React in this manner inside every JavaScript file on our site, then we end up with a bunch of duplicate inlined code. Even worse, we are forcing users to download the same code over and over. The browser only sees unique file names and sizes — it isn’t smart enough to realize that inside those files are large chunks of repeated code. What we really want is one file that contains all of our common code and can be reused on every page. That way, the browser only downloads the code once; every other page load uses a cached copy.

Thankfully, Browserify natively provides a solution to this issue.

The trick is to create a single file that contains all JavaScript requires used site-wide. This single bundle can then be included on all of your pages.

Browserify gives us an easy way to do this. For example, using gulp:

var gulp = require('gulp');
var browserify = require('browserify');
var envify = require('envify');
var source = require('vinyl-source-stream');   // vinyl wrapper so the bundle can flow into gulp.dest

gulp.task('browserify-global', function() {
	return browserify()
		.require(['react','react-dom'])
		.transform(envify)
		.bundle()
		.pipe(source('exports.js'))
		.pipe(gulp.dest('./js'));
});

This creates an exports.js file which is a bundle of React and ReactDOM. We can now include this on every page of our site — the browser will download it once and then cache it for every subsequent page.

But we still have a problem. Browserify doesn’t know about exports.js when it processes the rest of the site’s JavaScript. It will go ahead and inline React and ReactDOM as usual wherever it’s require’d.

The second piece to make this work is to tell Browserify to not inline certain require’d libraries:

var glob = require('glob');   // in the same gulpfile, so gulp/browserify/source are already required above

gulp.task('browserify-custom', function() {
	return glob('./src/**/*.js', function(err, files) {
		if (!err) {
			// bundle each page script separately, leaving the shared libraries external;
			// note: the bundles run inside glob's async callback, so gulp won't wait on them
			var tasks = files.map(function(entry) {
				return browserify({ entries: [entry] })
					.external(['react','react-dom'])
					.bundle()
					.pipe(source(entry))
					.pipe(gulp.dest('./js'));
			});
		}
	});
});

Now, whenever Browserify encounters a require for ‘react’ or ‘react-dom’, it won’t inline the script. But, as long as we include the exports.js generated in the previous step, the reference will resolve, and it will be able to execute any React or ReactDOM code.

This isn’t limited to third-party JavaScript libraries either. If you have your own code that is referenced across the entire site, then you can include it:

.require(['react','react-dom', {file: './src/pn.js', expose: 'pn'}])

The expose property specifies the name to use in your require statements. In this case, whenever I need to reference code in pn.js, I can simply require('pn').
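
So, once 'pn' is also listed as an external (see the next step), any page script can lean on the shared bundle. A tiny sketch:

// A page script, bundled with .external(['react','react-dom','pn']):
var React = require("react");      // not inlined; resolved from exports.js in the browser
var ReactDOM = require("react-dom");
var pn = require("pn");            // the site-wide code exposed from pn.js

// ...use React, ReactDOM and pn as usual; none of them are duplicated in this bundle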

In the second step, we can now specify pn as an external library:

.external(['react','react-dom','pn'])

ReactJS: What I Learned From My First Component

Today marks an important milestone — I’m blogging about a JavaScript library!

Ever since I took a deep-dive JavaScript boot camp a few months ago, I’ve been eager to start a project that would let me hone my newly-acquired skills. To that end, I’ve been refactoring ProjectNom’s (poorly-coded) JavaScript so that it conforms to best practices.

That’s low-hanging fruit though. What I really wanted to sink my teeth into was something like AngularJS — a modern JavaScript framework that takes the language to another level, opening up entirely new ways of rendering a website (e.g. as a single-page application).

But the problem with Angular is that it’s a full MVC framework, and an opinionated one at that. To truly get the most out of it, I’d have to build ProjectNom from scratch – which I had just finished doing for other reasons, and didn’t want to do again so soon.

At work, it has been proposed that we use ReactJS for any front-end development so that components could be shared and re-used across projects. I didn’t know anything about React except the name, so when I had some free time, I decided to research it.

I started to get really excited. Not only did the philosophy of the framework sound right up my alley, but it was also designed in such a way that it could slot easily into an existing project. After a tutorial or two, I made the decision to take it for a spin on ProjectNom. What follows are my experiences and takeaways as I moved from the perfect world of tutorials to a real world use case.

To be clear, this post is not a tutorial. There are many of those already, including on React’s own website. I include a quick introduction to the theory of React in the next section, but after that I will assume you know the basics of how React works so that we can dig a little deeper.

So what is React?

Let’s imagine for a moment that standard HTML elements — div, ul, input, etc — are like LEGO bricks. They each have a function and purpose; but, on their own, they aren’t very interesting or useful.

React gives us a way to define components. A React component is like a LEGO set – it specifies the pieces and assembly instructions to create an interesting, complex object. It’s easy to define your own component, but there are also a large number of ready-made React components available. Either way, once a component is defined, all you have to do is ask React for it, and it’s ready to use on a web page.

Of course, a static collection of HTML elements is only slightly more interesting than a single HTML element. Most web pages are driven by the data that flows through them, and that’s where React’s power is fully realized. Each React component contains state, and this state defines how the component should look or behave. You can specify an initial state when you first ask React for a component, but that state can be modified at any time in response to new data. You could think of it like a LEGO set that specifies exactly which minifigs should go where, but provides the flexibility to move them around later.
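
As a quick taste of what that looks like in code, here is a bare-bones sketch (not from ProjectNom; the names are made up) of a component whose rendering follows its state:

var React = require("react");
var ReactDOM = require("react-dom");

// A minimal component: clicking the button updates its state, and React re-renders it
var LikeButton = React.createClass({
	getInitialState: function() {
		return {liked: false};
	},

	handleClick: function() {
		this.setState({liked: !this.state.liked});
	},

	render: function() {
		return React.DOM.button({className: "btn", onClick: this.handleClick},
			this.state.liked ? "Liked!" : "Like"
		);
	}
});

// Ask React for the component and put it on the page (hypothetical mount point)
ReactDOM.render(React.createElement(LikeButton), document.getElementById("likeButton"));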

If this sounds as interesting to you as it does to me, then I encourage you to check out a tutorial or two. Feel free to come back here once you understand the basics.
