
Cache Invalidation Strategies With Varnish Cache

Phil Karlton once said, “There are only two hard things in Computer Science: cache invalidation and naming things.” This article is about the harder of these two: cache invalidation. It’s directed at readers who already work with Varnish Cache. To learn more about it, you’ll find background information in “Speed Up Your Mobile Website With Varnish.”


10 microseconds (or 250 milliseconds): that’s the difference between delivering a cache hit and delivering a cache miss. How often you get hits is known as the “hit rate,” and it depends on two factors: the volume of traffic and the average time to live (TTL), the length of time the cache is allowed to keep an object. As system administrators and developers, we can’t do much about the traffic, but we can influence the TTL.
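To make the TTL idea concrete, here is a minimal sketch of a cache whose objects expire once their time to live elapses. This is plain Python, not Varnish’s VCL, and the `TTLCache` name is illustrative rather than anything from Varnish itself:

```python
import time


class TTLCache:
    """Toy cache: an object counts as a hit only while its TTL has not elapsed."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, time the object was stored)

    def set(self, key, value):
        self.store[key] = (value, time.monotonic())

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None  # miss: the key was never cached
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self.store[key]
            return None  # miss: the object outlived its TTL
        return value  # hit


cache = TTLCache(ttl_seconds=0.05)
cache.set("page", "<html>…</html>")
print(cache.get("page"))  # hit while the object is still fresh
time.sleep(0.06)
print(cache.get("page"))  # miss once the TTL has passed
```

A longer TTL raises the hit rate but means stale content is served for longer, which is exactly why invalidation strategies matter.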

The post Cache Invalidation Strategies With Varnish Cache appeared first on Smashing Magazine.


After Editorially: The Search For Alternative Collaborative Online Writing Tools

I’m going to let you in on a little secret: the best writers, be it your favorite authors or those that write for Smashing Magazine, don’t do it alone. Often, they work with an editor (or two), who will help them coalesce their words into something more compelling or easier to understand.


Having worked with several editors — and having been a technical editor myself — I’ve really come to appreciate this aspect of the writing process. Refinement is an essential aspect of any creative process. As refactoring code can make a program more logical and efficient, editing a text can allow an underlying idea to be more clearly stated, or make a piece more enjoyable to read.

The post After Editorially: The Search For Alternative Collaborative Online Writing Tools appeared first on Smashing Magazine.


Why You Should Get Excited About Emotional Branding


  

Globalization, low-cost technologies and saturated markets are making products and services interchangeable and barely distinguishable. As a result, today’s brands must go beyond face value and tap into consumers’ deepest subconscious emotions to win the marketplace.

The Role Of Brands Is Changing

In recent decades, the economic base has shifted from production to consumption, from needs to wants, from objective to subjective. We’re moving away from the functional and technical characteristics of the industrial era, into a time when consumers are making buying decisions based on how they feel about a company and its offer.

BusinessWeek captured the evolution of branding back in 2001:

“A strong brand acts as an ambassador when companies enter new markets or offer new products. It also shapes corporate strategy, helping to define which initiatives fit within the brand concept and which do not. That’s why the companies that once measured their worth strictly in terms of tangibles such as factories, inventory and cash have realized that a vibrant brand, with its implicit promise of quality, is an equally important asset.”

I’d take it a step further and suggest that the brand is not just an important part of the business — it is the business. As Dale Carnegie says:

“When dealing with people, let us remember we are not dealing with creatures of logic. We are dealing with creatures of emotion.”

It’s Time To Get Emotional

In a borderless world where people are increasingly doing their research and purchases online (75% of Americans admit to doing so while on the toilet), companies that don’t take their branding seriously face imminent demise.

Enter emotional branding. It’s a highly effective way to provoke reactions, sentiments and moods, ultimately building experience, connection and loyalty with a company or product on an irrational level. That’s the ironic part: most people don’t believe they can be emotionally influenced by a brand. Why? Because that’s their rational mind at work. People make decisions emotionally and then rationalize them logically. Emotional branding, therefore, works on people at a hidden, subconscious level. And that’s what makes it so incredibly powerful.

Neuroscientists have recently made great strides in understanding how the human mind works. In his book Emotional Design: Why We Love (or Hate) Everyday Things, cognitive scientist Donald Norman explains how emotions guide us:

“Emotions are inseparable from and a necessary part of cognition. Everything we do, everything we think is tinged with emotion, much of it subconscious. In turn, our emotions change the way we think, and serve as constant guides to appropriate behavior, steering us away from the bad, guiding us toward the good.”

Emotions help us to rapidly choose between good and bad and to navigate a world filled with harsh noise and unlimited options. This concept has been reinforced by multiple studies, including ones conducted by neuroscientist Antonio Damasio, who examined people who were healthy in every way except for brain injuries that had impaired their emotional systems. Lacking emotional input, these subjects could not make basic decisions about where to live, what to eat or which products they needed.


Recognize your emotions at play. Rice or potatoes? Saturday or Sunday? Say hello or smile? Gray or blue? The Rolling Stones or The Beatles? Crest or Colgate? Both choices are equally valid. It just feels good or feels right — and that’s an expression of emotion.

Emotions are a necessary part of life, affecting how you feel, how you behave and how you think. Therefore, brands that effectively engage consumers in a personal dialogue on their needs are able to evoke and influence persuasive feelings such as love, attachment and happiness.

Creativity Is Critical

What does that mean to marketers? Good ideas are increasingly vital to businesses. And that’s good news for creative professionals and agencies.

A Wall Street Journal article titled “So Long, Supply and Demand” reports:

“Creativity is overtaking capital as the principal elixir of growth. And creativity, although precious, shares few of the constraints that limit the range and availability of capital and physical goods. In this new business atmosphere, ideas are money. Ideas are, in fact, a new kind of currency altogether — one that is more powerful than money. One single idea — especially if it involves a great brand concept — can change a company’s entire future.”

As Napoleon Hill says:

“First comes thought; then organization of that thought, into ideas and plans; then transformation of those plans into reality. The beginning, as you will observe, is in your imagination.”

Emotional Branding In Action

Let’s look at some examples of branding and campaigns that go for the heart and, in some cases, hit the mark.

WestJet Christmas Miracle

WestJet Airlines pulled on heartstrings this past holiday season with a video of Santa distributing Christmas gifts to 250 unsuspecting passengers. The Canadian airline expected around 800,000 views but blew their competitors’ campaigns out of the air with more than 35 million views.


How the WestJetters helped Santa spread some Christmas magic to their guests. (Watch on YouTube)

Coca-Cola Security Cameras

While surveillance cameras are known for catching burglaries and brawls, a Coca-Cola ad released during the latest Super Bowl encourages us to look at life differently by sharing happy, moving moments captured on security cameras. You’ll witness people sneaking kisses, dancing and random acts of kindness.


All the small acts of kindness, bravery and love that take place around us, recorded by security cameras. (Watch on YouTube)

Homeless Veteran Time-Lapse Transformation

Degage Ministries, a charity that works with veterans, launched a video showing a homeless US Army veteran, Jim Wolf, getting a haircut and new clothes as part of an effort to transform his life. Degage Ministries told Webcopyplus that Wolf has completed rehab and is turning his life around, and that the video has so far raised more than $125,000, along with increased awareness of and compassion for veterans across the country.


A video of a homeless veteran named Jim, who volunteered to go through a physical transformation in September 2013. (Watch on YouTube)

Creating Emotional Connection

While neuroscientists have only recently made significant strides in understanding how we process information and make decisions, humans have been using a powerful communication tactic for thousands of years: storytelling. It’s a highly effective method to get messages to stick and to get people to care, act and buy.

The stories that truly engage and are shared across the Web are typically personal and contain some aspect of usefulness, sweetness, humor, inspiration or shock. Also, the brand has to be seen as authentic, not manufactured, or else credibility and loyalty will be damaged.

I discussed the Coca-Cola video with Kevin McLeod, founder and CEO of Yardstick Services, who suggests that most brands merely try to connect the emotions of a real moment in life to their brand.

“The Coke video is full of wonderful clips of people doing things that make us all feel good. I’m not going to lie, it got my attention and is very memorable. At the same time, I’m intelligent enough to see what Coke is doing. With the exception of the last clip, none of the “good things” in the video are related to Coca-Cola.

The ad primes us by making us feel good and then drops the brand at the end so that we connect those emotions to the Coke brand. It’s very shrewd. Part of me thinks it’s brilliant. The other part of me thinks it’s overly manipulative and beguiles a product that can’t stand on its own merits, of which caramel-colored, carbonated sugar water has few.”

McLeod puts forth sharp views about Coke merely stamping its brand on a video compilation that could just as well have come from IBM, Starbucks or virtually any other company. However, while he consciously recognized that the video manufactures emotion, he still enjoyed it, stating that it makes us all — including him — “feel good.” So, despite McLeod’s skepticism and resistance, the video still made an emotional connection with him. There’s the desired association: Coke = feeling good.


Folks make decisions emotionally and then rationalize them logically; therefore, emotional branding affects people at a hidden, subconscious level.

To create the strongest emotional connection with people, stories should explore both brand mystique and brand experience, and the actual product or service should be integrated. A brilliant example is The Lego Movie, released by Warner Bros. earlier this year. The Lego brand delivered a masterful story, using its products as the stars, and got families and kids around the globe to shovel out well over $200 million for what could be the ultimate toy commercial.

Designers, developers, copywriters and marketers in general should take a page from moviemakers, including the late writer, director and producer Sidney Lumet. He gave the following advice on making movies: “What is the movie about? What did you see? What was your intention? Ideally, if we do this well, what do you hope the audience will feel, think, sense? In what mood do you want them to leave the theater?” The same could be asked when you’re developing a brand story: What do you want the audience to feel?

Even product placement, where everything from sneakers to cars gets flashed on the screen, has evolved into “branded entertainment.” Now, products are worked into scripts, sometimes with actual roles. A well-known example is in the film Cast Away, in which Wilson, a volleyball named after the brand, serves as Tom Hanks’ personified friend and sole companion for four years on a deserted island. When Wilson gets swept away into the ocean and slowly disappears, sad music ensues, and many moviegoers shed tears over… well, a volleyball.

Making Brands Emotional

Connecting people to products and services is not an easy task. It takes careful consideration and planning. US marketing agency JB Chicago found success sparking an emotional connection for Vitalicious, its client in the pizza industry. Its VitaPizza product had fewer calories than any competitor’s; however, its message was getting lost among millions of other messages. Steve Gaither, president of JB Chicago, explains:

“We needed to bring that differentiation front and center, letting the target audience, women 25-plus interested in healthy living, know they can eat the pizza they love and miss without consuming tons of calories.”

A relationship concept was formed, and a campaign was soon launched with the following key messages: “You used to love pizza. And then the love affair ended. You’ve changed. And, thankfully, pizza has too! Now you and your pizza can be together again.” The agency then tested different ads, each centered on one of the following themes:

  • sweepstakes,
  • 190 calories,
  • gluten-free/natural,
  • “You and pizza. Reunited. Reunited and it tastes so good.”

The brand idea outperformed the other ads by a margin of three to one. Bringing a story into the equation resonated with the target audience.

Gaither also shared insight on a current story-building project for StudyStars, an online tutoring company whose brand wasn’t gaining traction. JB Chicago overhauled the brand and created a story to demonstrate that StudyStars is a skills-based tutoring system with a deep, fundamental approach to learning, one that ultimately delivers better outcomes.

“We needed to find and build camp at a place where skills-based tutoring intersects with the unmet needs of the buyer. We needed a powerful brand idea that enables us to claim and defend that space. And we needed to express that idea in a manner that is believable and differentiated.”

Seeking a concept that would look, feel, speak and behave differently, JB Chicago crafted the brand idea “Master the Fundamentals.” It suggests that learning is like anything else: you have to walk before you can run, or else you will fall. So, the agency is setting up a campaign, including a video, to show that students who fall behind in school due to a weak grasp of the fundamentals don’t just fall behind in the classroom — their struggles affect every other aspect of their lives.

Here’s a snippet of the drafted script:

Title: Pauline’s Story

We see a beautiful little girl in a classroom. Pauline. She is 8 years old. We can also see that she’s a little lost.

A quick shot of the teacher at the chalkboard, teaching simple multiplication, like 9 × 6. Back to Pauline. She’s not getting it.

We see Pauline again at age 12, again in class. She is looking at a math quiz. It’s been graded. She got a D.

There’s a sign hanging from her neck. The sign says “I never learned multiplication.”

We see Pauline again, now at 15. She is home. Her parents are screaming at each other about her poor academic performance. The sign around her neck is still there. “I never learned multiplication.”

We see a young waitress in a dreary coffee shop. It takes us a few seconds to realize that it’s Pauline, age 18. She is tallying a customer’s check.

A close shot of the check. Pauline is trying to calculate the tax. She can’t do it, so she consults a cheat sheet posted nearby. She’s still wearing the sign. “I never learned multiplication.”

She figures the tax out and brings the check over to an attractive collegiate-looking couple, who thank her and head for the door. She watches them leave.

Their life is everything hers is not. Their future is everything hers will never be. Slate (text) states StudyStars’ case, and the video ends with an invitation to visit studystars.com.

JB Chicago created a story that draws us in and links to emotions — possibly hope, fear, promise, security and other feelings — according to the person’s mindset, experience, circumstance and other factors. The key is that it gets to our hearts.

Emotional Triggers

Different visitors connect to and invest in products and services for different reasons. To help you strike an emotional chord with your audience, veteran marketer Barry Feig has carved out 16 hot buttons in Hot Button Marketing: Push the Emotional Buttons That Get People to Buy:

  • Desire for control
  • I’m better than you
  • Excitement of discovery
  • Revaluing
  • Family values
  • Desire to belong
  • Fun is its own reward
  • Poverty of time
  • Desire to get the best
  • Self-achievement
  • Sex, love, romance
  • Nurturing response
  • Reinventing oneself
  • Make me smarter
  • Power, dominance and influence
  • Wish-fulfillment

How Does It Make You Feel?

As emotional aspects of brands increasingly become major drivers of choice, it would be wise for designers, content writers and other marketers to peel back customers’ deep emotional layers to identify and understand the motivations behind their behavior.

So, the next time you ask someone to review your design or content, maybe don’t ask, “What do you think?” Instead, the smarter question might be:

“How does it make you feel?”

(al, il)


© Rick Sloboda for Smashing Magazine, 2014.


A Guide To Validating Product Ideas With Quick And Simple Experiments


  

You probably know by now that you should speak with customers and test your idea before building a product. What you probably don’t know is that you might be making some of the most common mistakes when running your experiments.

Mistakes include testing the wrong aspect of your business, asking the wrong questions and neglecting to define a criterion for success. This article is your guide to designing quick, effective, low-cost experiments.

A Product With No Users After 180 Days

Four years ago, I had a great idea. What if there was a more visual way to learn about our world? Instead of typing search queries into a text field, what if we could share visual queries with our social network and get information about what we’re looking at? It could change the way people search! I went out, found my technical cofounder, and we started building a photo Q&A app. Being a designer, I naturally thought that the branding and user experience would make the product successful.

Six months later, after rigorous usability testing and refinement of the experience, we launched. Everyone flocked to the app and it blew up overnight. Just kidding. No one cared. It was that devastating moment after a great unveiling when all you hear are crickets chirping.

Confused and frustrated, I went back to the drawing board to determine why. My cofounder and I parted ways, and, left without technical expertise, I decided to step out and do some research by interviewing potential users of the app. After a few interviews, the root cause of the failed launch finally dawned on me. My beautifully designed solution did not solve a real human need. It took five days of interviews before I finally accepted this truth and slowly let go.

The good news for you is that you don’t need to go through the same pain and waste of time. I recently started working on another startup idea. This time, I followed a structured process to identify key risks and integrate customer feedback early on.

A Product With 16 Paying Customers After 24 Hours

I work with many entrepreneurs to help them build their companies, and they always ask me for feedback on the user experience of their Web and mobile apps. They express frustration with finding UX talent and want some quick advice on their products in the meantime. This happens so frequently that I decided to learn more about their difficulties and see what might solve their problems.

I specified what I was trying to learn by forming a hypothesis:

“Bootstrapped startup founders have trouble getting UX feedback because they have no reliable sources to turn to.”

To test this, I set a minimum criterion for what I would accept as validation to further explore this opportunity. I had enough confidence that this was a big problem to set a criterion as high as 6 out of 10. This means that 6 out of the 10 people I interviewed needed to indicate that this was a big enough problem in order for me to proceed.

By stating my beliefs up front, I held myself accountable to the results and reduced the influence of any retroactive biases. I knew that if these entrepreneurs already had reliable sources to turn to for UX feedback and that they were happy with them, then generating demand for an alternative solution would be much harder.

Design of my first experiment on the Experiment Board.

In three hours of interviews, I was able to validate the pain point and the need for a better alternative. (You can watch a video walkthrough of my findings.) My next step was to test the riskiest assumption related to the solution that would solve this problem. Would they pay for an online service to get feedback from UX designers?

Instead of building a functioning marketplace with designer portfolios and payment functionality, or even wireframing anything, I simply set up a landing page with a price tag in the call to action to test whether visitors were willing to pay. This is called a pitch experiment. You can use QuickMVP to set up this kind of test.

Test if customers will pay for the service.

Behind the landing page, I attached a form to gather information on what they needed help with. Within a few hours, 10 people had paid for the service, asking for UX feedback on their websites. Having validated the demand, I needed to fulfill my promise by delivering the service to the people who had paid.

Did not build any functionality; just a form to collect information.

Because this market is two-sided — with entrepreneurs and designers — I tested the demand side first to see whether the solution provided enough value to elicit payment. Then, I tested the supply side to learn what kind of work designers were looking for and whether they were willing to consult virtually.

Test the second side of the market: the supply side.

To my surprise, the UX designers I spoke with had established clientele and were very picky about new clients, let alone wanting to consult with random startups online. But they all mentioned that they had been eager to take on any work when they were first starting out and looking for clients. So, I switched my focus to UX designers who are not yet established and are open to honing their skills by giving feedback online.

Armed with these insights, I iterated on my landing page to accommodate both sides of the market and proceeded to fulfill the demands of the customers I had accumulated. No wireframes. No code. Just a landing page and two forms!

A landing page that tests both sides of the market, simultaneously.

Interest in this service can also be measured by their willingness to fill out the form.

To simulate the back-end functionality, I emailed the requests to the UX designers, who would then respond with their feedback, which I would email back to the startup founders. Each transaction would take five minutes, and I did this over and over again with each customer until I could no longer handle the demand.

Do things that don’t scale in order to acquire your earliest customers and to identify business risks that you might overlook when implementing the technical aspects. This is called a concierge experiment. Once the manual labor has hit its limit, write the code to open the bottleneck. Through this process, I was able to collect feedback on use cases, user expectations and ideas for improvement. This focused approach allowed for more informed iterations in a shorter span of time, without getting lost in wireframing much of the application up front.

Today, BetterUX.it is a service through which startup founders connect with UX designers for feedback on their websites and apps, and designers get paid for their expertise and feedback.

How To Create A Product That People Want

What did I do differently? The structured process of testing my assumptions saved me time and confirmed that each part of my work was actually creating value for end users. Below is a breakdown of the steps I took, so that you can do the same.

Should You Build Your Idea?

My first mistake with my first startup was assuming that others had the same problem that I experienced. This is a common assumption that many gloss over. Build products that scratch your own itch, right? But many entrepreneurs realize too late that the problems they’re trying to solve are not painful enough to sustain a business.

As product people, we often have many ideas bubbling in our heads. Before getting too excited, test them and decide which one is the most viable to pursue.

Design An Effective Experiment

To get started, break down your idea into testable elements. An effective experiment must clearly define these four elements:

  1. hypothesis,
  2. riskiest assumption,
  3. method,
  4. minimum criterion for success.

At Lean Startup Machine, we’ve created a tool called an Experiment Board, which enables us to easily turn crazy ideas into effective experiments in a few minutes. As you go along, use the board as a framework to design your experiment and track progress. Refer to the templates provided on the board to quickly formulate your hypothesis, riskiest assumption, method and success criterion. You can also watch my video tutorial for more information on designing effective experiments.
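As a sketch of what such a board captures, the four elements can be written down as a single record before any data is gathered. This is plain Python, and the class and field names are illustrative, not part of the Experiment Board tool:

```python
from dataclasses import dataclass


@dataclass
class Experiment:
    """The four elements of an effective experiment, stated up front."""
    hypothesis: str           # "I believe customer x has a problem achieving goal y."
    riskiest_assumption: str  # the core assumption you have the least data on
    method: str               # "exploration", "pitch" or "concierge"
    success_criterion: str    # "x out of y people exhibit behavior z"


# The UX-feedback experiment from earlier in the article, as one record:
ux_experiment = Experiment(
    hypothesis=("Bootstrapped startup founders have trouble getting UX "
                "feedback because they have no reliable sources to turn to."),
    riskiest_assumption="Founders have no reliable source of UX feedback.",
    method="exploration",
    success_criterion="6 out of 10 interviewees confirm the pain point",
)
print(ux_experiment.method)  # exploration
```

Writing the elements down as one unit is the point: if any field is blank, the experiment isn’t ready to run.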

Construct a Hypothesis

Every experiment starts with a hypothesis. Start by forming a customer-problem hypothesis. Once it is validated, you can go on to form a problem-solution hypothesis.

  1. Define your customer.

    Which customer segment experiences the most pain? They are your early adopters, and you should target them first. These are the people who have the problem you’re solving for; they know they have it, they’re trying to solve it themselves, and they’re dying for a better way! Most people have trouble identifying these customers. If you do, too, then segment your potential customer base by level of pain and by differentiating characteristics, such as lifestyle and environmental factors. Being specific will reduce the time it takes to run through experiment cycles; once you’ve tested against one segment and found that the problem doesn’t resonate with it, you can quickly pivot to test another customer segment. In the long run, having a clear idea of who you’re building for will help you maintain a laser focus on what to prioritize and what to dismiss as noise.
  2. Define the problem.

    What problem do you believe you are solving? Phrase it from your customer’s perspective. Too often, people phrase this from the perspective of their own lofty vision (“The Web needs to be more human”) or from a business point of view (“Customers don’t use our service enough”). Also, avoid being too broad (“People don’t recycle”). These mistakes will make your hypothesis hard to test with a specific customer, and you’ll find yourself testing for a sentiment or an opinion in interviews, rather than a solvable problem. If you have trouble, phrase the problem as if a friend were describing it to you.
  3. Form a hypothesis.

    Brainstorm on a few customers and problems to consider all the possibilities. Then, combine the customer and problem that you want to focus on with this sentence: “I believe customer x has a problem achieving goal y.” You have just formed a testable hypothesis!

Identify Your Riskiest Assumption

Now that you have formed a customer-problem hypothesis, poke some holes and extract the riskiest assumption to be tested. Start by brainstorming on a few core assumptions. These are the assumptions that are central to the viability of your hypothesis or business. Think of an assumption as the behavior, mentality or action that needs to be true in order to validate the hypothesis.

Ask your team members, boss or friends to suggest any assumptions that you may have overlooked. After listing a few, identify the riskiest one. This is the core assumption that you are most uncertain about and have the least amount of data on. Testing the riskiest assumption first will speed up the experiment cycle. If the riskiest assumption is invalidated, then the hypothesis will be invalid, and you will have saved your company from going down the wrong path.

Choose a Method

After identifying the most critical aspect of your idea to test, determine how to test it. You could conduct three kinds of experiments before getting into wireframes. It’s best to start by gathering information firsthand through exploration. But you could choose a different method depending on your level of certainty and the data.

  1. Exploration
    Conduct qualitative interviews to verify and deepen your understanding of the problem. Even though you experience the problem yourself, you don’t know how big it is or who else has it. By conducting exploratory interviews first, you might realize that the opportunity isn’t as big as you had thought or that a bigger problem could be solved instead.
  2. Pitch

    Make sure the solution would actually provide value by selling the concept to customers before building the product. This will measure their level of determination to solve the problem for themselves. A potential customer not taking a certain action to use your service, like paying a small deposit or submitting an email address, indicates that the problem is not painful enough or that you haven’t found the right solution.
  3. Concierge

    Personally deliver the service to customers to test how satisfied they are with your solution. Did your value proposition meet their expectations? What was useful for them? What could have been done better? How likely are they to return or recommend the service to a friend? These are all insights you can discover in this step.

Set a Minimum Criterion for Success

Before running the experiment, decide up front what result will constitute success and what result will constitute failure. The minimum criterion for success is the weakest outcome you will accept to continue allocating resources and pursuing the solution. Factors like budget, opportunity cost, size of market, level of demand and business metrics all play into it.

The criterion is usually expressed as a fraction:

“I expect x number of people out of the y number of people in the experiment to exhibit behavior z.”

I like to set the criterion according to how big I think the problem is for that customer segment, and then determine how much revenue it would have to generate in order for me to keep working on it. At this point, statistical significance is not important; if your target customer segment is very specific, then testing with 10 of them is enough to start seeing a pattern.
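Stated as a fraction, checking the outcome of an experiment reduces to a single comparison. A minimal sketch in plain Python (the function name is illustrative):

```python
def meets_minimum_criterion(observed: int, needed: int, sample_size: int) -> bool:
    """True if at least `needed` of the `sample_size` participants
    exhibited the behavior the experiment was looking for."""
    if not 0 <= needed <= sample_size:
        raise ValueError("criterion must be between 0 and the sample size")
    return observed >= needed


# The UX-feedback experiment: 6 out of 10 interviewees had to
# confirm the pain point for the hypothesis to count as validated.
print(meets_minimum_criterion(observed=7, needed=6, sample_size=10))  # True
print(meets_minimum_criterion(observed=4, needed=6, sample_size=10))  # False
```

The value of writing the threshold down before the interviews is that the pass/fail decision is mechanical afterwards, leaving no room for retroactive bias.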

Once you have validated the hypothesis with a small sample of the customer segment, then you can scale up the experiments to test with larger sample sizes or with other segments.

Run the Experiment

Once you have defined these elements, you are ready to run the experiment! Have team members look at your Experiment Board and confirm whether they agree with what you’re testing. This will hold you and the team accountable to the results, so that there are no subjective arguments afterwards.

Analyze the Results and Decide on Next Steps

After gathering data from your target customers, document the results and your learning on the Experiment Board. Did the results meet your criterion for success? If so, then your hypothesis was valid, and you can move forward to test the next risk with the product. If not, then you need to form a new hypothesis based on your learning to get closer to something that holds true. Track your progress over time on the Experiment Board to get a holistic picture of your validated learning and to continually make informed decisions.

Test and repeat. You’re on your way to creating a great product that people want!

(al, il)


© Grace Ng for Smashing Magazine, 2014.


Building The Web App For Unicef’s Tap Campaign: A Case Study


  

Ever since smartphones landed in almost everyone’s pockets, developers have been faced with the question of whether to go with a mobile website or a native app.

Native applications offer the smoothest and most feature-rich user experience in almost every case. They have direct access to the GPU, making layer compositions and pixel movements buttery-smooth. They provide native UI frameworks that end users are familiar with, and they take care of the low-level aspects of UI development that developers don’t have time to deal with.

When eschewing an app in favor of a mobile website, developers often sacrifice user experience, deep native integration and a complex UI in favor of SEO and accessibility. But now that JavaScript rendering engines are improving immensely and GPU-accelerated canvas and CSS animations are becoming widely supported, we can start to consider mobile websites a primary use case.

Unicef’s latest campaign, Tap, presented us with the challenge of combining the accessibility of a mobile website with the native capabilities, UI and overall experience that someone would expect of a native app. Our friends at Droga5 came to us with a brief to create a mobile experience that tracks how long a user avoids using their phone.

Unicef’s 2014 Tap campaign presented the challenge of combining the accessibility of a mobile website with the smooth user experience of a native app.

For every 10 minutes that a user gives up their phone, a sponsor would donate a day’s worth of water to children in the developing world. While the user patiently waits, they are presented with real-time and location-based statistics of other users who are sacrificing their precious phone time.

We’ll discuss a few of the biggest challenges here: detecting user activity, achieving performant animations, and building an API integrated with Google Analytics.

Detecting User Activity

Detecting user activity through a mobile browser was an interesting challenge and involved a lot of research, testing and normalization across all types of phones. The slightest differences and inaccuracies between phones became suddenly apparent. To explain the process, we’ll break it down into three categories: user movement, user exiting, and device-sleep prevention.

User Movement

One core piece of functionality is detecting any movement by the user. Fortunately, most mobile browsers today have access to the built-in gyroscope and accelerometer via JavaScript’s DeviceOrientation event. The unfortunate exception is devices running Android 2.3 (Gingerbread), which at the time of writing has roughly a 20% market share. In the end, the project was not worth abandoning due to one version of Android, so we pushed on. This decision proved to be even better than we thought because most devices that run version 2.3 are old, which means less memory, a slower CPU and aging hardware.

To detect movement, we first have to detect an “idle” position. We instruct the user to set their phone down, while we check the readings on the x and y axis. We start a timer with a setInterval, and if that position’s values remain within a 6° range for a few seconds, then we save those values as the device’s idle position. (If the user moves, then we restart the timer again until the phone does not move for a few seconds.) From there, we listen for the DeviceOrientation event and compare the new position’s values to the idle values. If there is a difference, then we fire off a custom user_move event.
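That calibration step might look something like the sketch below. The `Calibrator` object, the sample rate and the sample count are our own illustrative choices; only the 6° stability window and the few-second wait come from the description above:

```javascript
// Hypothetical sketch of idle-position calibration: collect recent
// deviceorientation readings and lock in an idle position once the
// values stay within a 6-degree window for a few seconds.
function Calibrator(onIdle) {
   this.samples = [];
   this.onIdle = onIdle; // called with (x, y) once the device is still
}

Calibrator.prototype.addSample = function (x, y) {
   this.samples.push({ x: x, y: y });
   // assume roughly 10 samples per second, so 30 samples ≈ 3 seconds (illustrative)
   if (this.samples.length > 30) this.samples.shift();

   if (this.samples.length === 30 && this.isStable(6)) {
      var last = this.samples[this.samples.length - 1];
      this.onIdle(last.x, last.y); // save as the device's idle position
   }
};

Calibrator.prototype.isStable = function (range) {
   var xs = this.samples.map(function (s) { return s.x; });
   var ys = this.samples.map(function (s) { return s.y; });
   return (Math.max.apply(null, xs) - Math.min.apply(null, xs)) <= range &&
          (Math.max.apply(null, ys) - Math.min.apply(null, ys)) <= range;
};
```

The wiring to the actual `deviceorientation` event is omitted; each event would simply feed `event.beta` and `event.gamma` into `addSample`.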

The concept was simple to implement, but we found that most devices fluctuate by a couple of degrees when lying still. The sensitivity to movement is quite high, so we first had to determine a threshold above which we could be confident that the user has intentionally moved their device. After some trial and error, we decided on a 12° range of difference (+ and -) from the idle position, on both the x and y axis. If any movement occurs outside of that range, we assume it to be deliberate. Thus, users can bump their phone slightly with no consequence.


this.devOrientHandlerProxy = $.proxy(this.devOrientHandler, this);
window.addEventListener('deviceorientation', this.devOrientHandlerProxy, false);

MovementDetector.prototype.devOrientHandler = function(event) {
   var curr_x = Math.floor(event.beta);
   var curr_y = Math.floor(event.gamma);
   var curr_z = Math.floor(event.alpha);

   var didMove = this.calcMovement(curr_x, curr_y, curr_z, this.movement_threshold);

   if(didMove) {
      this.announceMovement();
   }
}

MovementDetector.prototype.calcMovement = function(new_x, new_y, new_z, threshold) {
   var x_diff = Math.abs(this.x_idle_pos - new_x);
   var y_diff = Math.abs(this.y_idle_pos - new_y);
   var z_diff = Math.abs(this.z_idle_pos - new_z);
   z_diff = z_diff > 180 ? 360 - z_diff : z_diff;

   return x_diff > threshold || y_diff > threshold || z_diff > threshold;
}

As you can see in the first four lines of the calcMovement method, we are obtaining the difference between the idle position and the new position. Because the difference in values could be negative, we make sure to get the absolute value (Math.abs(val)). You’ll notice that the z_diff formula is a bit different. Because the raw difference on the z-axis can be anywhere from 0 to 359, we check whether it is above 180; if so, then we subtract it from 360.

This gives us the shortest distance between the two points. For example, if the device moves from 359 to 10, then the shortest distance would be 11. Finally, we check to see whether any of those three values (x_diff, y_diff, or z_diff) are greater than the threshold; if so, then we announce a user_move event.
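That wrap-around logic can be pulled out into a tiny helper — our own extraction for illustration, not code from the app:

```javascript
// Shortest angular distance between two compass headings (0–359 degrees).
// A raw difference greater than 180 means the short way round crosses 0/360.
function zDistance(idle, current) {
   var diff = Math.abs(idle - current);
   return diff > 180 ? 360 - diff : diff;
}
```

For example, `zDistance(359, 10)` returns 11, matching the example above.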

Movement detection on iOS and Android (Samsung Galaxy S3 and HTC One). (View large version)

We had to test extensively across both Android and iOS devices. iOS was straightforward, whereas we found subtle differences between Android versions and manufacturers, especially with the stock browser. Certain devices would jump dramatically between values on the z-axis. Thus, we decided not to consider any movement on the z-axis in our detection — meaning that users could slide a phone laterally on a tabletop with no consequence.

User Exiting

Another action that we wanted to detect was the user exiting the browser, to signal their intention to end the experience. We had to listen for a couple of events via the PageHide and PageVisibility APIs. (PageHide and PageVisibility are available on Android only in later versions — in the stock browser in 4.3+, and in Chrome for Android 4.0+. iOS 6 has PageHide, and iOS 7 has PageVisibility.)

We knew we couldn’t detect across the board, but we felt that implementing it for browsers that support it would be worthwhile. The following matrix shows which mobile browsers support PageHide and PageVisibility:

Devices PageHide event PageVisibility API
iOS 6.0 Safari
iOS 7.0 Safari
iOS 6.0 Chrome
iOS 7.0 Chrome
Android 2.3 — 4.2 stock browser
Android 4.3 stock browser
Android 4.4 stock browser
Android 4.0+ Chrome

Sleep Prevention

Keeping the device awake was the final core piece of functionality that we needed to support activity detection. This was crucial because the whole idea of the campaign is for users to stay away from their phones for as long as they possibly can. By default, all phones enter sleep mode after a few minutes. Some phones can be manually set to never sleep, or the user could keep theirs plugged in, but we could not rely on either of those options.

We had to think of interesting workarounds. iOS and Android had to be treated differently.

For iOS 6 devices, we make use of HTML5 audio, loading a silent MP3 file asynchronously and looping it endlessly during game play by adding the loop attribute to our <audio> element. For Android devices, we piggyback on what we do for iOS 6. However, Android’s display turns off after a few minutes even when an audio file is playing. Fortunately, unlike iOS, Android allows for inline video.

So, we run the same createMediaLoop method, but this time loading a 10-minute silent video with the <video> element, placed outside of the viewport. We found that the loop attribute doesn’t always work with inline video across Android devices, so we listen for HTML5’s media ended event instead and restart playback ourselves. By looping a hidden video, we are able to keep Android devices from going to sleep.

Here is some sample code:


//for iOS 6
var media_type = 'audio';
var media_file = 'silence.mp3';

//for Android
var media_type = 'video';
var media_file = 'silence.mp4';

ExampleClass.prototype.createMediaLoop = function(media_type, media_file) {
   this.mediaEl = document.createElement(media_type);
   this.mediaEl.className = 'mediaLoop';
   this.mediaEl.setAttribute('preload', 'auto');

   var mediaSource = document.createElement('source');
   mediaSource.src = media_file;

   switch(media_type) {
      case 'audio':
         // create an audio element in iOS 6
         // and play a silent MP3 file
         this.mediaEl.loop = true;
         mediaSource.setAttribute('type', 'audio/mpeg');
         break;
      case 'video':
         // create a video element for Android devices
         // and play a silent video file
         mediaSource.setAttribute('type', 'video/mp4');
         var _self = this;

         this.mediaEl.addEventListener('ended', function() {
            _self.mediaEl.currentTime = 0;
            _self.mediaEl.play();
         }, false);
         break;
   }

   this.mediaEl.appendChild(mediaSource);
   document.body.appendChild(this.mediaEl);
   this.mediaEl.volume = 0;
   this.start();
}   

iOS 7 is much easier. Thanks to a UI update in the browser, the address bar always remains on screen, unlike in iOS 6. So, every 20 seconds we trigger a navigation to the page’s own URL and immediately cancel it with window.stop(), which is enough to prevent sleep mode.


setInterval(function(){
   window.location.href = 'tap.unicefusa.org';
   setTimeout(function(){
      window.stop();
   },0);
}, 2e4);

We cannot use this method for iOS 6 because the user would notice the address bar slide into the view and then slide back out.

Animations

Animations are important in reinforcing the theme of water and making the experience fun. Whether we were creating a water-ripple effect, bubbles or waves, we isolated each animation and programmed different approaches to achieve the best result. Knowing that we had to do this for a slew of browsers by various manufacturers, we took the following into consideration:

  • Performance

    Do frames get dropped when testing against supported devices? How does GPU rendering compare to CPU rendering?
  • Value added

    How much does the animation really add to the experience? Could we conceivably drop it?
  • Loading size

    How much does the animation add to the website’s overall load? Does it require a library?
  • Compatibility with iOS 6+ and Android 4+

    Does it require complex fallbacks?

Bubbles

Let’s first look at bubbles, which animate from bottom to top. The design called for floating bubbles, whose size, focus and opacity would provide a sense of depth within the environment. We decided to test a few approaches, but these are the main two we were curious about:

  • Animating DOM elements using hardware-accelerated CSS 3-D transforms (transform: translate3d(x, y, z));
  • Rendering all circles on a 2-D canvas element.

Note: Animating via the top/left properties is not an option due to the lack of subpixel rendering and the long time to paint each frame. Paul Irish has written more about this.

We tested several approaches to find the best method to animate the bubbles. (View demo)

We pulled off the canvas method by creating two transparent canvases: one on top of the content and one below. We create our bubbles as objects with randomized properties in memory (diameter, speed, opacity, etc.). At each frame, we clear the canvas via context.clearRect(0, 0, width, height);, and then draw each bubble to the screen. To create a floating, bubble-like movement, we need to change each bubble’s x and y values in each frame. For the y-axis, we subtract a constant value in each frame: b.y = b.y - b.speed;.

In this case, we determine a unique speed for each bubble using (Math.random() / 2) + 0.1. For the x-axis, we need a smooth repetitive oscillation, which we can achieve by taking the sine value of the frame count: b.x = b.startX + Math.sin(count / b.amplitude) * 50;. You can view the extracted code and the demo.
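Stitching those formulas together, a minimal sketch of the canvas loop might look like this. The object shape, constants and drawing code are illustrative; the two motion formulas are the ones quoted above:

```javascript
// Create a bubble as a plain object with randomized properties.
function makeBubble(startX, y) {
   return {
      startX: startX,
      x: startX,
      y: y,
      speed: (Math.random() / 2) + 0.1,     // unique per-bubble rise speed
      amplitude: 20 + Math.random() * 40,   // controls how wide the sway is
      radius: 2 + Math.random() * 10
   };
}

// Advance one bubble by one frame: rise on y, sway on x.
// Kept pure so the motion math can be checked without a canvas.
function updateBubble(b, frameCount) {
   b.y = b.y - b.speed;
   b.x = b.startX + Math.sin(frameCount / b.amplitude) * 50;
   return b;
}

// Browser-only render loop (ctx is a 2-D canvas context):
function animate(ctx, bubbles, width, height, frameCount) {
   ctx.clearRect(0, 0, width, height);
   bubbles.forEach(function (b) {
      updateBubble(b, frameCount);
      ctx.beginPath();
      ctx.arc(b.x, b.y, b.radius, 0, Math.PI * 2);
      ctx.fill();
   });
   requestAnimationFrame(function () {
      animate(ctx, bubbles, width, height, frameCount + 1);
   });
}
```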

The DOM-based implementation using CSS 3-D transforms follows a very similar method. The only big differences are that we dynamically create and insert DIV elements at the beginning and, using Modernizr, apply vendor-prefixed translate3d(x, y, z) properties on each animation frame. You can view the extracted code and the demo.
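The DOM variant can be sketched the same way, building a translate3d() string from the same motion math. Element creation and Modernizr’s vendor-prefix handling are omitted, and the names are illustrative:

```javascript
// Build the transform string for one bubble at a given frame.
// Pure function: the same inputs always produce the same string.
function transformFor(b, frameCount) {
   var x = b.startX + Math.sin(frameCount / b.amplitude) * 50;
   var y = b.y - b.speed * frameCount;
   return 'translate3d(' + x + 'px, ' + y + 'px, 0)';
}

// Browser-only: apply the transform to a previously created div.
function renderBubble(el, b, frameCount) {
   el.style.webkitTransform = transformFor(b, frameCount);
   el.style.transform = transformFor(b, frameCount);
}
```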

To optimize performance, we considered a canvas implementation because GPU acceleration has been enabled for the browsers we support (iOS 5 with its Nitro JavaScript engine, and Chrome for Android 4+); however, we noticed severe issues with aliasing and the frame rate on Android devices.

Timeline profiles using the canvas element and CSS 3-D transforms (View large version)

We also did some performance profiling in Chrome’s emulation mode on the desktop (better methods exist for doing more granular remote testing on a mobile device). The difference in results between the two was still interesting: A GPU-accelerated 2-D canvas showed better performance than GPU-accelerated CSS transforms, especially with a higher number of DOM elements, due to the rendering time for each one and the recalculation of styles.

After carefully considering several techniques, we went with CSS 3-D transforms to animate the bubbles. (View large version)

In the end, we used CSS 3-D transforms. We only need to animate 16 bubbles at a time, and the CPU and GPU on supported devices collectively seem to handle the overhead just fine. The performance and anti-aliasing issues with canvas rendering on old Android devices were the determining factors. At the time of writing and in this particular case, canvas wasn’t an option for us, but browser vendors certainly are not ignoring it, and the latest rendering engines of mobile browsers have seen massive improvements.

Waves

We use wave animations throughout both the mobile and desktop experience — specifically, as a design detail to reinforce the water theme and as a transition to wash away old content and bring in new content. As with the bubbles, we explored using both canvas and CSS-based animations. And likewise, CSS animations were the way to go. A wave PNG costs us only 7 KB, and we get much better performance from mobile browsers across the board.

As with bubbles, we explored using both canvas and CSS-based animations. (View demo)

Our isolated demo of the desktop implementation (which is similar to mobile) is really quite simple. Each wave is a background image set with background-repeat:repeat-x and a looping keyframe animation that moves left with linear easing. We make the speed of the waves in front slightly faster and the waves in the back slower to simulate depth. You can view the code, which uses Sass and Compass.

We also tried a very different vanilla JavaScript approach by creating a wave oscillation. We used a wave oscillator algorithm created by Ken Fyrstenberg Nilsen, adjusting it to suit our needs. You can view this demo, too.

We abandoned the oscillation effect because of poor performance on old Android devices.

The effect turned out to be really nice and organic, but the performance was lacking on old Android devices, so we abandoned the approach altogether.
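For the curious, the general idea of such an oscillator — our own stripped-down sketch, not Nilsen’s algorithm — is to sample a sine function across the canvas width on every frame:

```javascript
// Compute one frame of wave-surface heights across the canvas width.
// Pure function so the output can be checked without a canvas.
function waveHeights(width, phase, amplitude, wavelength, baseline) {
   var heights = [];
   for (var x = 0; x < width; x++) {
      heights.push(baseline + Math.sin((x + phase) / wavelength) * amplitude);
   }
   return heights;
}

// Browser-only drawing (ctx is a 2-D canvas context); advancing the
// phase each frame makes the wave appear to travel.
function drawWave(ctx, width, height, phase) {
   var ys = waveHeights(width, phase, 10, 40, height / 2);
   ctx.clearRect(0, 0, width, height);
   ctx.beginPath();
   ctx.moveTo(0, ys[0]);
   for (var x = 1; x < width; x++) ctx.lineTo(x, ys[x]);
   ctx.lineTo(width, height);
   ctx.lineTo(0, height);
   ctx.closePath();
   ctx.fill();
   requestAnimationFrame(function () { drawWave(ctx, width, height, phase + 2); });
}
```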

API

During gameplay, we wanted to provide some insightful facts and location-based statistics, as well as encourage users to keep playing. We used several APIs, combining them with scores from our database.

The back end runs on the Laravel PHP framework and a few APIs. For location-based statistics, we could have asked the user for their location via HTML5 geolocation, but we wanted a more seamless experience and didn’t want to interrupt the user with a confirmation dialog box. We don’t need a precise location, so we opted for MaxMind’s GeoIP2 service. This service gives us enough data to get the user’s rough location, which we can combine with other services and data.

We also want people to know that they are a part of a bigger community, so we wanted to provide statistics based on website analytics. The obvious choice was to use Google Analytics’ new API, as well as its newer Real Time Reporting API.

Because we have access to different kinds of data, we are able to display facts that are relevant to the user. For example, a user in the US would get a statistic on how their state compares to other states in the country, according to Google Analytics. By using Google’s Real Time Reporting API, we see how many active users are on the website, and we display that to the user, illustrating other people’s participation. In our PHP code, we use the Google Analytics for Laravel 4 package, which works great and handles a lot of the boilerplate, making it much easier to get data back from Google Analytics’ API.


$ga_realtime_metric = 'ga:activeVisitors';
$ga_service = Analytics::getService();
$optParams = array('dimensions' => $this->ga_dimensions, 'sort' => '-' . $ga_realtime_metric);

$results = $ga_service->data_realtime->get([google profile id], $ga_realtime_metric, $optParams);

We also use GeoIP2’s service to record people’s times, to display the score of a particular city or state.

To prepare for spikes in traffic, to stay within each API’s rate limit (Google Analytics’ limit is 50,000 requests per project per day) and to optimize speed, we cache some data at 10-minute intervals. We cache certain other data, such as GeoIP2’s, even longer (every five days) because it doesn’t change that often.

Due to the ever-growing number of scores, the queries behind certain statistics would take longer than is acceptable for each user. So, we set up a couple of CRON jobs that run these queries every 10 minutes, caching the updated statistics on the server.

When a user hits the website, an AJAX call to the server asks for the cached data, which is returned to the browser in a big JSON response. This cuts loading times considerably and keeps us within the rate limit for each API that we use.

Conclusion

As mobile browsers continue to improve, offering new features and enhancing performance, new opportunities like this will arise. It’s always important to question whether you should build a native app or a Web app, and keep in mind the pros and cons of each, especially because the differences in their capabilities are narrowing rapidly.

Developing our Tap app for the Web was not only more affordable (with two Web developers working on a single code base, as opposed to a developer for each platform), but also made the experience more accessible and easily shareable. We’ll never know for sure, but we’re confident that we would not have reached 3.7 million website visits in the first month had we gone the native route. About 18% of those visits came from Safari in-app browsers — meaning that people had clicked on a link in their Facebook or Twitter feed and were taken directly into the experience. A native app would have seriously hampered that ability to share the experience or message quickly.

We hope this article has been helpful in illustrating both the thought process of Web versus native and the technical hurdles involved in building Tap. The project was really fun and challenging, it was for a good cause, and it introduced a unique mechanism for donating, one that we hope to see propagate and manifest in new and creative ways.

Further Resources

(al, il, ml)


© Nick Jonas and Francis Villanueva for Smashing Magazine, 2014.


How To Build A Ruby Gem With Bundler, Test-Driven Development, Travis CI And Coveralls, Oh My!


  

Ruby is a great language. It was designed to foster happiness and productivity in developers, all the while providing tools that are effective and yet focused on simplicity. One of the tools available to the Rubyist is the RubyGems package manager. It enables us both to include “gems” (i.e. packaged code) that we can reuse in our own applications and to package our own code as a gem to share with the Ruby community. We’ll be focusing on the latter in this article.

I’ve written an open-source gem named Sinderella (available on GitHub), and in this article I’ll go through all of the steps I took to write the code (including the test-driven development process) and how I prepared it for release as a gem via RubyGems. I’ll also show you how to set up your tests to run through a continuous integration (CI) server using the popular Travis CI service.

In case you’re unfamiliar with CI, it refers to the process of merging code with a central repository, with the aim of preventing integration problems down the road in a project’s life cycle. (If you use a version control system such as git and a decentralized code repository such as GitHub, then you might already be familiar with these concepts.)

Finally, I’ll show you how to use Coveralls to measure the code coverage of your tests and to obtain a statistical history of your commits.


Image credit: The Ruby and Bundler logos, along with the Travis CI mascot.

What We’ll Cover

What Does Sinderella Do?

As described in the README on GitHub, Sinderella allows the author to “pass a code block to transform a data object for a specific period of time.” So, if we provide data like the following…


{ :key => 'value' }

… then we could, for example, convert it to the following for a set period of time:


{ :key => 'VALUE' }

Once the time period has expired, the data is returned to its normal state.

Sinderella is made up of two files: the main application and a data store that holds the original and transformed data.

Later in this article, I’ll describe my development process for creating the gem, and we’ll review some of the techniques required to produce a robust and stable gem.

What We Won’t Cover

To be clear, this article is focused on creating a Ruby gem using Bundler and on following best practices, such as test-driven development and CI.

We won’t cover how to write Ruby code or how we developed the Sinderella gem. Nor will we cover how to write RSpec tests (although we will demonstrate how to set up RSpec). RSpec is a detail of implementation and can be swapped out for any testing library that you deem appropriate.

Additional Requirements

To get started, you’ll need to register for accounts with the following services:

Registering for these services is free. Travis CI is free for all open-source projects (which this will be). You may pay for a Pro account, which allows you to set up CI for your private code repositories, but that’s not needed for what we’ll be doing here.

You’ll also need to be comfortable working in the command line. You don’t have to be a Unix shell scripting wizard, but I’ll be working here exclusively in a shell environment (specifically, using the Terminal on Mac OS X) to do everything, including running shell commands, opening multiplexers (such as tmux) and editing code (with Vim).

Which Version Of Ruby To Use

Ruby has many different flavors:

  • Ruby (also known as Matz’s Ruby Interpreter) is the original language, written in C.
  • Rubinius is an implementation of Ruby that is written mainly with Ruby.
  • JRuby is an implementation of Ruby built on top of the Java Virtual Machine (JVM), with Java.

I deliberately used JRuby to implement Sinderella because part of the gem’s code relies on “threads,” and MRI doesn’t provide true threading.

JRuby provides a native thread implementation because it is built on top of the JVM. But really, using any of the above variations would have been fine.

Unfortunately, though, it’s not all clear sailing with JRuby. Quite a few gems still use C extensions (i.e. code written in C that Ruby can import). At the moment, you can enable a flag in JRuby that allows it to use C extensions, but doing so is merely a temporary solution because this option is expected to be removed from JRuby in future releases.

This could be an issue, for example, if you’re using Pry (a replacement for Ruby’s irb REPL). Pry works fine with JRuby, but you wouldn’t be able to take advantage of the equally amazing pry-plus extension, which offers many extra debugging capabilities, because some of its dependencies rely on C extensions.

I’ve worked around this limitation somewhat by using pry-nav. It’s not as good and can be a little buggy in places when used under JRuby, but it gets the job done.

Bundler

To help us create the gem, we’ll use the popular Bundler gem.

Bundler is primarily designed to help you manage a project’s dependencies. If you’ve not used it before, then don’t worry because we’ll be taking advantage of a lesser known feature anyway, which is its ability to generate a gem boilerplate. (It also provides some other tools that will help us manage our gem’s packaging, which I’ll get into in more detail later on.)

Let’s begin by installing Bundler:


gem install bundler

Once Bundler is installed, we can use it to create our gem. But before doing that, let’s review some other dependencies that we’ll need.

Dependencies

Developing the Sinderella gem requires five dependencies. Four are needed during the development process and won’t be needed in production. The fifth is a “hard” dependency, meaning that it is needed for the Sinderella gem to function properly.

Of these dependencies, Crimp and RSpec are specific to Sinderella. So, when developing your own gem, you would likely replace them with other gems.

RubyGems

We need to install RubyGems in order to take advantage of the package manager and its built-in gem commands (which Bundler will wrap with its own enhancements).

RSpec

RSpec is a testing framework for the Ruby programming language. We’ll cover this in more detail later on in the article.

When building your own gem, you might want to swap RSpec for a different testing tool. Another popular option is Cucumber.

Guard

Guard is a command-line tool that responds to events. We’ll be using it to more easily write code for test-driven development. It works by monitoring files that you tell it to watch and then, when it notices changes to those files, triggering some command that you specify based on the type of file that was changed.

This comes in really handy when you’re running tests in a multiplexer such as tmux or when using a terminal such as iTerm2 (which supports multiple terminal windows being open at once), because while you’re editing the code in one terminal, you can get instant feedback on breaking tests as you work on the code. This is known as a tight feedback loop (more on this later).

Pry

Pry is a replacement REPL for Ruby’s standard irb. It offers everything the standard irb does but with a lot of additional features. It’s useful for testing code to see how it works and whether the Ruby interpreter fails to run it. It’s also useful for debugging code when something doesn’t work the way you expect.

It didn’t have much of a presence in the development of Sinderella, but it is such an important tool that I felt it deserved more than a cursory mention. For example, if you’re unsure of how a particular Ruby feature works, you could test drive it in Pry.

If you want to learn more about how to use it, then watch the screencast on Pry’s home page.

Crimp

Crimp is a gem released by the BBC that converts a piece of data into an MD5 hash.

Generating A Boilerplate

OK, now we’ve finally gotten to the point where we can generate the set-up files that will configure our gem file.

As mentioned, Bundler has the tools to generate the foundation of a gem so that we don’t have to type it all out by hand.

Now, open up the terminal and run the following command:


bundle gem sinderella

When that command is run, the following is generated:


❯ bundle gem sinderella
  create  sinderella/Gemfile
  create  sinderella/Rakefile
  create  sinderella/LICENSE.txt
  create  sinderella/README.md
  create  sinderella/.gitignore
  create  sinderella/sinderella.gemspec
  create  sinderella/lib/sinderella.rb
  create  sinderella/lib/sinderella/version.rb
Initializing git repo in /path/to/Sinderella

Let’s take a moment to review what we have.

Folder Structure

Bundler has automatically created a lib directory for us, which holds a single Ruby file named after our project. The name of the directory is extracted from the name provided via the bundle gem command.

Be aware that if you specify a hyphen (-) in the gem’s name, then Bundler will create a deeper folder structure by using the hyphen as a delimiter. For example, if your command looks like bundle gem foo-bar, then the following directory structure would be created:


├── lib
│   └── foo
│       ├── bar
│       │   ├── bar.rb
│       │   └── version.rb
│       └── bar.rb

This is actually quite useful when you’re producing multiple gems that are all namespaced under a single project. For a real-world example of this, look at BBC News’ GitHub repository, which has multiple open-source gems published under the namespace alephant.

gemspec

The gemspec file is used to define the particular configuration of your gem. If you weren’t using Bundler, then you would need to manually create this file (according to RubyGems’ documentation).

Below is what Bundler generates for us:


# coding: utf-8
lib = File.expand_path('../lib', __FILE__)
$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
require 'sinderella/version'

Gem::Specification.new do |spec|
  spec.name          = "sinderella"
  spec.version       = Sinderella::VERSION
  spec.authors       = ["Integralist"]
  spec.email         = ["mark.mcdx@gmail.com"]
  spec.summary       = %q{TODO: Write a short summary. Required.}
  spec.description   = %q{TODO: Write a longer description. Optional.}
  spec.homepage      = ""
  spec.license       = "MIT"

  spec.files         = `git ls-files -z`.split("\x0")
  spec.executables   = spec.files.grep(%r{^bin/}) { |f| File.basename(f) }
  spec.test_files    = spec.files.grep(%r{^(test|spec|features)/})
  spec.require_paths = ["lib"]

  spec.add_development_dependency "bundler", "~> 1.5"
  spec.add_development_dependency "rake"
end

As you’ll see later, this is a basic outline of the final gemspec file that we’ll need to create. We’ll end up adding to this file some of the other dependencies that our gem will need to run (both development and production dependencies).

For now, note the following details:

  • $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)

    This adds the lib directory to Ruby’s load path, which makes require’ing files elsewhere in the code a little cleaner.
  • require 'sinderella/version'

    This loads in a version.rb file, which was generated when Bundler constructed our boilerplate. This file serves as a way to implement semantic versioning in our gem releases. Every time we release the gem, we’ll need to update the version number; then, when we run the particular Bundler command to release the gem, it will automatically pull in the updated value to our gemspec file.
  • Gem::Specification.new do |spec|

    Here, we define a new specification and include properties such as the name of the gem, the version number (see the previous point), a list of the authors of the gem and a contact email address. We can also include some descriptive text about the gem.
  • Next, we define the files to include in the gem. Any executable files found are injected dynamically into the file by looping through a bin directory (if one is found). We also dynamically inject a list of test files (which we’ll see later on when we create a spec folder to hold the tests that will ensure that the gem works as expected).
  • Finally, we define the dependencies, including both runtime and development dependencies. At the moment, there is only the latter, but soon enough we’ll have one runtime dependency to add.

The RubyGems guides have full details on the specification. You could configure a whole host of settings, but Bundler helps us by defining the essential ones.
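For reference, the version.rb file mentioned above is nothing more than a module constant. A minimal sketch of what Bundler generates (the version number shown here is an assumption; use whatever your gem is currently at):

```ruby
# lib/sinderella/version.rb
module Sinderella
  # Bump this constant before each release; `rake release`
  # reads it when tagging and pushing the gem.
  VERSION = '0.0.1'
end
```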

Gemfile

In a typical Ruby project, you’ll find that the Gemfile is filled with a list of dependencies, which Bundler then collates and installs for you. In this instance, because we’re generating a gem and not writing a standard application, our Gemfile will actually be pretty bare, made up of two lines: one to tell Bundler where to source the gems from, and the other to inform Bundler that the dependencies are listed in the gemspec file instead.
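Those two lines look something like this (a sketch of Bundler’s output; the exact source URL may vary by Bundler version):

```ruby
# Gemfile
source 'https://rubygems.org'

# Dependencies are declared in sinderella.gemspec
gemspec
```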

Rakefile

Again, in a typical Ruby application, a Rakefile will contain many different tasks (written in Ruby) that you can execute via the command line. In this case, a one-line Rakefile has been provided that loads bundler/gem_tasks. That in turn loads additional rake commands that Bundler adds to make it easier to build and deploy your gem. We’ll see how to use these commands later.
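That one-line Rakefile is simply the following (we’ll extend it later to add an RSpec task):

```ruby
# Rakefile
require 'bundler/gem_tasks'
```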

LICENSE.txt

Because we’re releasing code that could potentially be used by other developers, Bundler generates an MIT license by default and dynamically injects the current year and your user name into it.

Feel free to either delete it or replace it with another license if the MIT one doesn’t fit your needs, although it’s pretty standard and relevant to most projects.

README

Lastly, Bundler has taken the tediousness out of generating a README file. It includes TODO messages wherever relevant, so that you know what needs to be manually added before the gem can be built (such as a description of the gem and a code example that shows how you expect the gem to be used). It also automatically generates installation instructions and a section on how other developers can fork your code and contribute new features and bug fixes.

One other benefit of Bundler is that it delivers a consistent code base across all gems you create. All gems will have the same structure, and the consistency across content such as the README file will make it easier for users who integrate more than one of your gems to understand them.

Test-Driven Development

Test-driven development (TDD) is the process of building code on top of supporting tests. Sinderella was developed using its principles.

The guiding steps are “red, green, refactor,” and TDD fundamentally breaks down as follows:

  1. Write a test.
  2. Run the test and watch it fail (because there is no code yet for it to pass).
  3. Write the least amount of code to pass the test (literally, hack it together).
  4. Refactor the code so that it’s cleaner and better written.
  5. If a test fails during refactoring, then start the red, green, refactor process again.

This is sometimes referred to as a tight feedback loop: getting quick or instant feedback on whether code is working.

By writing the tests first, you ensure that every line of code exists for a reason. This is an incredibly powerful principle and one you should recall when caught in a debate over whether TDD “sucks” or “takes too long.”

Starting a project with tests can feel daunting. But in addition to ensuring that every line of code exists for a reason, it provides an opportunity for you to properly design the APIs.

RSpec

As for writing tests for Sinderella, I chose to use RSpec, which is described thus on its website:

RSpec is a testing tool for the Ruby programming language. Born under the banner of behaviour-driven development, it is designed to make test-driven development a productive and enjoyable experience.

In order to use RSpec in our gem, we’ll need to update the gemspec file to include more dependencies:


spec.add_development_dependency "rspec"
spec.add_development_dependency "rspec-nc"
spec.add_development_dependency "guard"
spec.add_development_dependency "guard-rspec"
spec.add_development_dependency "pry"
spec.add_development_dependency "pry-remote"
spec.add_development_dependency "pry-nav"

As you can see, we’ve added RSpec to our list of dependencies, but we’ve also included rspec-nc, which provides native notifications on Mac OS X (rspec-nc is a nicety and not essential to produce the gem). Having notifications at the operating-system level can be quite handy, allowing you to do other things (perhaps check email) while tests run in the background.

We’ve also added (as you would expect) guard as a dependency, as well as guard-rspec, which Guard needs in order to understand how to handle RSpec-specific requests. Finally, the suite of Pry tools will help us debug any problems we come across and will be useful for any gems you develop in the future.

RSpec Rake Tasks

Now that we’ve updated gemspec to include RSpec as a dependency, we’ll need to add an RSpec-related Rake task to our Rakefile, which will enable us (manually) or Guard to execute the task and run the RSpec test suite:


require 'rspec/core/rake_task'
require 'bundler/gem_tasks'

# Default directory to look in is `spec`
# Run with `rake spec`
RSpec::Core::RakeTask.new(:spec) do |task|
  task.rspec_opts = ['--color', '--format', 'nested']
end

task :default => :spec

In the updated version of Rakefile above, we are loading an additional file that is packaged with RSpec (require 'rspec/core/rake_task'). This new file adds some RSpec-related modules and classes for us to use.

Once this code has loaded, we create a new instance of the RakeTask class (created when we loaded rspec/core/rake_task) and pass it a code block to execute. The code block we pass will define the options for our RSpec test suite.

Spec Files

Now that the majority of the RSpec test suite configuration is in place, the last thing we need to do is add a test file.

Let’s create a spec directory and, inside that, create sinderella_spec.rb:


require 'spec_helper'

describe Sinderella do
  it 'does stuff' do
    pending # no code yet
  end
end

You’ll see that we’ve included a temporary specification that states that the code “does stuff.” When the test suite is run, then this test will not cause any errors, even though no code has been implemented yet, because we have marked the test as “pending” (an RSpec-specific command). At this point, we’re only interested in getting a barebones set-up in place; we’ll flesh out the tests soon enough.

You may have noticed that we’re also loading another file, named spec_helper.rb. This type of file is typical in an RSpec suite and is used to load any dependencies or libraries that are required for the tests to run. The content of the spec helper file will look like this:


require 'pry'
require 'sinderella'

All we’ve done here is load Pry (in case we need it for debugging) and the main Sinderella gem code (because this is what we want to test).

Guard And tmux

At this point, we’ve gone over the set-up and preparation of RSpec and Rake (to get our testing framework in place). We also know what Guard is and how it helps us to test the code. Now, let’s go ahead and add a Guardfile to the root directory, with the following contents:


guard 'rspec' do
  # watch /lib/ files
  watch(%r{^lib/(.+)\.rb$}) do |m|
    "spec/#{m[1]}_spec.rb"
  end

  # watch /spec/ files
  watch(%r{^spec/(.+)\.rb$}) do |m|
    "spec/#{m[1]}.rb"
  end
end

This file tells Guard that we’re using RSpec to run our tests. It also defines which directories to watch for changes and what to do when it notices changes. In this case, we’re using regular expressions to match any files in the lib or spec directory and to execute the relevant RSpec command that runs our tests (or to run one specific test).
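You can see what the first watcher does by running its regular expression against an example path on its own (the file names here are illustrative):

```ruby
# A change to a file under lib/ triggers the correspondingly
# named spec file, mirroring the Guardfile's first watcher.
matcher = %r{^lib/(.+)\.rb$}

m = matcher.match('lib/sinderella/data_store.rb')
spec_to_run = "spec/#{m[1]}_spec.rb"
puts spec_to_run # prints spec/sinderella/data_store_spec.rb
```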

We’ll see in a minute how to actually run Guard. For now, let’s see how tmux fits into this workflow.

tmux

Some developers prefer to have separate applications open (for example, a code editor such as Sublime Text and a terminal application to run tests). I prefer to use tmux to have multiple terminal shells open on one screen and to have Vim open on another screen to edit code. Thus, I can edit code and get visual feedback from the terminal about the state of the tests all on one screen. You don’t need to follow the exact same approach. As mentioned, there are other ways to get feedback, but I have found tmux and Vim to be the most suitable.

So, we have two tmux panes open, one in which Vim is running, and the other in which a terminal runs the command bundle exec guard (this is how we actually run Guard).

That command will return something like the following back to the terminal:


❯ bundle exec guard
09:53:55 - INFO - Guard is using Tmux to send notifications.
09:53:55 - INFO - Guard is using TerminalTitle to send notifications.
09:53:55 - INFO - Guard::RSpec is running
09:53:55 - INFO - Guard is now watching at '/path/to/Sinderella' 

From: /path/to/Sinderella/sinderella.gemspec @ line 1 :

 => 1: # coding: utf-8
    2: lib = File.expand_path('../lib', __FILE__)
    3: $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
    4: require 'sinderella/version'
    5: 
    6: Gem::Specification.new do |spec|

From this point on, you can press the Return key to run all tests at once, which will display the following message in the terminal:


09:57:41 - INFO - Run all
09:57:41 - INFO - Running all specs

This will be followed by the number of passed and failed tests and any errors that have occurred.

Continuous Integration With Travis CI

As mentioned at the beginning, continuous integration (CI) is the practice of regularly merging code into a central repository, in order to catch integration problems early in a project’s life cycle.

We’ll use the free Travis CI service (which you should have signed up for by now).

Upon first viewing your “Accounts” page in Travis CI, you’ll be presented with a complete list of all of your public GitHub repositories, from which you can select ones for Travis CI to monitor. Then, any time you push a commit to GitHub, Travis CI will run your tests.

Once you have selected repositories, you’ll be redirected to a GitHub “hooks” page, where you can confirm and authorize the configuration.

The Travis CI page for the Sinderella gem is where you can view the entire build history, including both passed and failed tests.

.travis.yml

To complete the configuration, we need to add a .travis.yml file. If you’ve enabled your repository from your Travis CI account and you don’t have a .travis.yml file, then Travis CI will throw an error and complain that you need one. Let’s look at the one we’ve set up for Sinderella:


language: ruby
cache: bundler

rvm:
  - jruby
  - 2.0.0

script: 'bundle exec rake'

notifications:
  email:
    recipients:
      - my@email.com
    on_failure: change
    on_success: never

Let’s go through each property to understand what it does:

  • language: ruby

    Here, we’re telling Travis CI that the language in which we’re writing tests is Ruby.
  • cache: bundler

    This tells Travis CI that we want it to cache the gems we’ve specified. (Running Bundler can be a slow process, and if your gems are unlikely to change often, then you don’t want to keep running bundle install every time you push a commit, because we want our tests to run as quickly as possible.)
  • rvm:

    This specifies the different Ruby versions and engines that we want our tests to run against (in this case, JRuby and MRI 2.0.0).
  • script: 'bundle exec rake'

    This gives Travis CI the command it requires to run the tests.
  • notifications:

    This indicates how we want Travis CI to notify us. Here, we’re specifying an email address to receive the notifications. We’re also specifying that an email should be sent only if a failure has occurred (there’s no point in getting thousands of emails telling us that nothing is wrong).

Preventing a Test Run

If you’re committing a change that doesn’t affect your code or tests, then you don’t want to waste time watching those non-breaking changes trigger a test run on Travis CI (no matter how fast the tests are).

The easiest way to avoid this is to add [ci skip] anywhere in your commit message. Travis CI will see this and then happily ignore the commit.

Code Coverage And Statistics With Coveralls.io

One last service we’ll use is Coveralls, which you should have already registered for.

Coveralls works with your continuous integration server to give you test coverage history and statistics. Free for open source, pro accounts for private repos.

When you log into Coveralls for the first time, it will ask you to select repositories to monitor. It works like Travis CI, listing all of your repositories for you to enable and disable access. (You can also click a button to resynchronize the repository list, in case you’ve added a repository since last syncing).

To set up Coveralls, we need to add a file that tells Coveralls what to do. For our project, we’ll add a file to the root directory named .coveralls.yml, containing a single line of configuration:


service_name: travis-ci

This tells Coveralls that we’re using Travis CI as our CI server. (If you’ve signed up for a Pro account, then use travis-pro instead.)

We also need to add the Coveralls gem to our gemspec:


spec.add_development_dependency "coveralls"

Finally, we need to include Coveralls’ code in our spec_helper.rb file:


require 'coveralls'
Coveralls.wear!

require 'pry'
require 'sinderella'

Notice that we have to load Coveralls before the Sinderella code. If Coveralls were loaded after the application’s code, it wouldn’t be able to hook into the application properly.

Let’s return to our TDD process.

Skeleton Specification

When following TDD, I prefer to create a skeleton of a test suite, so that I have some idea of the type of API to develop. Let’s change the contents of the sinderella_spec.rb file to have a few empty tests:


require 'spec_helper'

describe Sinderella do
  let(:data) {{ :key => 'value' }}
  let(:till_midnight) { 0 }

  describe '.transforms(data, till_midnight)' do
    it 'returns a hash of the passed data' do
      pending
    end

    it 'stores original and transformed data' do
      pending
    end

    it 'restores the data to its original state after set time' do
      pending
    end
  end

  describe '.get(id)' do
    context 'before midnight (before time expired)' do
      it 'returns transformed data' do
        pending
      end
    end

    context 'past midnight (after time expired)' do
      it 'returns original data' do
        pending
      end
    end
  end

  describe '.midnight(id)' do
    it 'restores the data to its original state' do
      pending
    end
  end
end

Notice the pending command, which is provided by RSpec and allows the tests to run without throwing an error. (The suite will highlight pending tests that still need to be implemented so that you don’t forget about them.)

You could also use the fail command, but pending is recommended for unimplemented tests, particularly before you’ve written the code to execute them. Relish demonstrates some examples.

From here on, I follow the full TDD process and write the code from the outside in: red, green, refactor.

For the first test I wrote for Sinderella, I realized that my code needs a way to create an MD5 hash from a data object, and that’s when I reached for the BBC News’ gem, Crimp. Thus, I had to update the gemspec file to include a new runtime dependency: spec.add_runtime_dependency "crimp".
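Crimp’s job, in essence, is to reduce a Ruby object to a deterministic string and MD5 it. The idea can be sketched with the standard library alone (this is an illustration of the concept, not Crimp’s actual implementation, so the digests won’t necessarily match Crimp’s):

```ruby
require 'digest/md5'

# Build a deterministic MD5 signature for a hash by serializing
# its key/value pairs in sorted order first, so that key order
# doesn't affect the result.
def signature(data)
  Digest::MD5.hexdigest(data.sort.inspect)
end

id = signature({ :key => 'value' })
puts id # a 32-character hex string
```

Sorting the pairs first means that `signature(:a => 1, :b => 2)` and `signature(:b => 2, :a => 1)` produce the same identifier.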

I won’t go step by step into how I TDD’ed the code because it isn’t relevant to this article. We’re focusing more on the principles of creating a gem, not on details of implementation. But you can get all of the gruesome details from the public list of commits in Sinderella’s GitHub repository.

Also, you might not even be interested in the RSpec testing framework and might be planning on using a different framework to write your gem. That’s fine. Anyway, what follows is the full Sinderella specification file (as of February 2014):

sinderella_spec.rb


require 'spec_helper'

describe Sinderella do
  let(:data) {{ :key => 'value' }}
  let(:till_midnight) { 0 }

  def create_new_instance
    @id = subject.transforms(data, till_midnight) do |data|
      data.each do |key, value|
        data.tap { |d| d[key].upcase! }
      end
    end
  end

  describe '.transforms(data, till_midnight)' do
    it 'returns a MD5 hash of the provided data' do
      create_new_instance
      expect(@id).to be_a String
      expect(@id).to eq '24e73d3a4f027ff81ed4f32c8a9b8713'
    end
  end

  describe '.get(id)' do
    context 'before midnight (before time expired)' do
      it 'returns the transformed data' do
        Sinderella.stub(:check)
        create_new_instance
        expect(subject.get(@id)).to eq({ :key => 'VALUE' })
      end
    end

    context 'past midnight (after time expired)' do
      it 'returns the original data' do
        create_new_instance
        Sinderella.reset_data_at @id
        expect(subject.get(@id)).to eq({ :key => 'value' })
      end
    end
  end

  describe '.midnight(id)' do
    context 'before midnight (before time expired)' do
      it 'restores the data to its original state' do
        Sinderella.stub(:check)
        create_new_instance
        subject.midnight(@id)
        expect(subject.get(@id)).to eq({ :key => 'value' })
      end
    end
  end
end

data_store_spec.rb


require 'spec_helper'

describe DataStore do
  let(:instance)    { DataStore.instance }
  let(:original)    { 'bar' }
  let(:transformed) { 'BAR' }

  before(:each) do
    instance.set({
      :id          => 'foo',
      :original    => original,
      :transformed => transformed
    })
  end

  describe 'set(data)' do
    it 'stores original and transformed data' do
      expect(instance.get('foo')[:original]).to eq(original)
      expect(instance.get('foo')[:transformed]).to eq(transformed)
    end
  end

  describe 'get(id)' do
    it 'returns data object' do
      expect(instance.get('foo')).to be_a Hash
      expect(instance.get('foo').key?(:original)).to be true
      expect(instance.get('foo').key?(:transformed)).to be true
    end
  end

  describe 'reset(id)' do
    it 'replaces the transformed data with original data' do
      instance.reset('foo')
      foo = instance.get('foo')
      expect(foo[:original]).to eq(foo[:transformed])
    end
  end
end

Passing Specification

Here is the output of our passed test suite:


❯ rake spec
/path/to/.rubies/jruby-1.7.9/bin/jruby -S rspec ./spec/data_store_spec.rb ./spec/sinderella_spec.rb --color --format nested

DataStore
  set(data)
    stores original and transformed data
  get(id)
    returns data object
  reset(id)
    replaces the transformed data with original data

Sinderella
  .transforms(data, till_midnight)
    returns a MD5 hash of the provided data
  .get(id)
    before midnight (before time expired)
      returns the transformed data
    past midnight (after time expired)
      returns the original data
  .midnight(id)
    before midnight (before time expired)
      restores the data to its original state

Finished in 0.053 seconds
7 examples, 0 failures

Design Patterns

According to Wikipedia:

A design pattern in architecture and computer science is a formal way of documenting a solution to a design problem in a particular field of expertise.

Many design patterns exist; one in particular, the Singleton pattern, is usually frowned upon.

I won’t debate the merits or problems of the Singleton design pattern, but I opted to use it in Sinderella to implement the DataStore class (which is the object that stores the original and transformed data), because what would be the point of having multiple instances of DataStore if the data is expected to be shared from a single access point?

Luckily, Ruby makes it really easy to create a Singleton: require 'singleton' from the standard library, and add include Singleton to your class definition.

Once you’ve done that, you will be able to access a single instance of your class only via an instance property — for example, MyClass.instance.some_method().
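Here is a minimal, self-contained sketch of the pattern, using a hypothetical Counter class rather than Sinderella’s own DataStore:

```ruby
require 'singleton'

class Counter
  include Singleton

  def initialize
    @count = 0
  end

  def increment
    @count += 1
  end

  attr_reader :count
end

# `include Singleton` makes `Counter.new` private; the only way
# in is via `.instance`, and every caller gets the same object.
Counter.instance.increment
Counter.instance.increment
puts Counter.instance.count # prints 2
```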

We saw the specification (or test file) for DataStore in the previous section. Below is the full implementation of DataStore:


require 'singleton'

class DataStore
  include Singleton

  def set(data)
    hash_data = {
      :original    => data[:original],
      :transformed => data[:transformed]
    }

    container.store(data[:id], hash_data)
  end

  def get(id)
    container.fetch(id)
  end

  def reset(id)
    original  = container.fetch(id)[:original]
    hash_data = {
      :original    => original,
      :transformed => original
    }

    container.store(id, hash_data)
  end

  private

  def container
    @store ||= Hash.new
  end
end

Badges

You might have seen some nice green badges in your favorite GitHub repository, indicating whether the tests associated with the code passed or not. Adding these to the README is straightforward enough:


[![Build Status](https://travis-ci.org/Integralist/Sinderella.png?branch=master)](https://travis-ci.org/Integralist/Sinderella) 

[![Gem Version](https://badge.fury.io/rb/sinderella.png)](http://badge.fury.io/rb/sinderella)

[![Coverage Status](https://coveralls.io/repos/Integralist/Sinderella/badge.png)](https://coveralls.io/r/Integralist/Sinderella)

The first badge is provided by Travis CI, which you can read more about in the documentation.

The second is provided by RubyGems. You’ll notice on your gem’s page a “badge” link, which provides the required code and format (in this case, in Markdown format).

The third is provided by Coveralls. When you visit your repository page in the Coveralls application, you’ll see a link to “Get badge URLS”; from there, you can select the relevant format.

REPL-Driven Development

Tests and TDD are a critical part of the development process but won’t eliminate all bugs by themselves. This is where a tool such as Pry can help you to figure out how a piece of code works and the path that the code takes during a conditioned execution.

To use Pry, enter the pry command in the terminal. As long as Pry is installed and available from that directory, you’ll be dropped into a Pry session. To view all available commands, run the help command.

Testing a Local Gem Build

If you want to run the gem outside of the test suite, then you’ll want to use Pry. To do this, we’ll need to build the gem locally and then install that local build.

To build the gem, run the following command from your gem’s root directory: gem build sinderella.gemspec. This will generate a physical .gem file.

Once the gem is built and a .gem file has been created, you can install it from the local file with the following command: gem install ./sinderella-0.0.1.gem.

Notice that the built gem file includes the version number, so that you know you’re installing the right one (in case you’ve built multiple versions of the gem).

After installing the local version of the gem, you can open a Pry session and load the gem with require 'sinderella' and continue to execute your own Ruby code within Pry to test the gem as needed.

Releasing Your Gem

Once our gem has passed all of our tests and we’ve built and run it locally, we can look to release the gem to the Ruby community by pushing it to the RubyGems server.

To release our gem, we’ll use the Rake commands provided by Bundler. To view what commands are available, run rake --tasks. You’ll see something similar to the following output:


rake build    # Build sinderella-0.0.1.gem into the pkg directory
rake install  # Build and install sinderella-0.0.1.gem into system gems
rake release  # Create tag v0.0.1 and build and push sinderella-0.0.1.gem t...
rake spec     # Run RSpec code examples
  • rake build

    This first task does something similar to gem build sinderella.gemspec but places the gem in a pkg (package) directory.
  • rake install

    The second task does the same as gem install ./sinderella-0.0.1.gem but saves us the extra typing.
  • rake release

    The third task is what we’re most interested at this point. It creates a tag in git, indicating the relevant version number, pulled from the version.rb file that Bundler created for us. It then builds the gem and pushes it to RubyGems.
  • rake spec

    The fourth task runs the tests using the test runner (in this case, RSpec), as defined and configured in the main Rakefile.

To release our gem, we’ll first need to make sure that the version number in the version.rb file is correct. If it is, then we’ll commit those changes and run the rake release task, which should give the following output:


❯ rake release
sinderella 0.0.1 built to pkg/sinderella-0.0.1.gem.
Tagged v0.0.1.
Pushed git commits and tags.
Pushed sinderella 0.0.1 to rubygems.org.

Now we can view the details of the gem at https://rubygems.org/gems/sinderella, and other users may access our gem in their own code simply by including require 'sinderella'.

Conclusion

Thanks to the use of Bundler, the process of creating a gem boilerplate is made a lot simpler. And thanks to the principles of TDD and REPL-driven development, we know that we have a well-tested piece of code that can be reliably shared with the Ruby community.

(al, il)


© Mark McDonnell for Smashing Magazine, 2014.


Involving Clients In Your Mobile Workflow


  

A lot of talented, mobile-minded folks across the globe produce great work, yet sometimes you still hear many of them complain about their relationships with their clients. They often mention feeling isolated and not truly understanding what the client really needed.

This lack of personal interaction often leads to misunderstanding, as well as less awareness of and appreciation for all your hard work. While involving clients in your mobile workflow can be challenging, really working together will make a big difference. In this article, I’ll share some important things I’ve learned about involving clients in my mobile workflow. Let’s dive into some tips and tricks that I use every day.

Work Out Your Manifesto

Projects don’t happen overnight. It usually takes a few meetings to get to know the client and to discuss collaboration. Your company’s business strategists and account managers invest a lot of time and energy in this process. While they will often seem to distance themselves from your daily work, speaking with them is a real window of opportunity. These “suits” are the first ones to meet potential clients, and they convey your company’s vision, portfolio and creative approach. They can be a great help in nurturing a more involved relationship.

A great way to approach this internal conversation is to work out a manifesto, a summary of your creative vision and beliefs. Get together with your team and discuss your existing workflow and how it could further support what you really stand for as a team. Ask the team lead to help you work it out and make the message tangible. Do this simply by making a presentation to your colleagues. But why stop there? You could design posters, flyers, even stickers for your team so that they can help you spread the word.

“Design is not an afterthought,” from Little Miss Robot’s manifesto.

We were getting really frustrated with clients asking us to define or optimize their mobile experience, when in fact they just wanted us to make things “prettier.” The slide above helps our client service directors to detect how potential clients really think about design. If we see that they don’t value our vision or approach, then we respectfully decline to work with them. Involvement starts with finding clients who want you to work with them, instead of for them.

Don’t Miss The Kick-Off

A kick-off meeting is the perfect opportunity to raise awareness of and appreciation for your mobile workflow. Learn as much as possible about the client, and find out how they would like you to help their business. Do this simply by asking about their vision, strategy and goals. Also great is to ask what inspires them and to get insight into their competitive research and analysis. From the minute you show true interest in their business, you are changing the way they look at you. By immediately working with them, you become their partner, instead of just someone who designs and codes.

A kick-off meeting is also a great time to double-check that you are on the same page. Sometimes we forget that our creative jargon might confuse clients. Big Spaceship points this out in its inspiring manual (PDF):

“We act like humans, we talk like humans, and we think like humans. And we call out anyone who does the opposite.”

In the last two years, I’ve learned that clients find it very hip to focus on responsive design, even if they don’t clearly understand it. Too often, it leads to a discussion on size and dimensions, when the conversation should be conceptual and strategic. Reserve some time in your kick-off meeting to explain what “responsive” means and why you believe in its value. Educate the client and steer the conversation towards what is really needed to make the project better. And if you notice that a certain topic needs more time and attention, host a mini-workshop to talk it through.

Dealing With Isolation

I don’t understand why some account and project managers try to keep their team away from the client as much as possible. Granted, it makes perfect sense that they manage the client, oversee the scope, deadlines and budget, and handle the communication and next steps. But when the work is in progress, keeping the team isolated doesn’t add any value. If this happens to you, explain to the manager that getting direct feedback from the client will help you fine-tune the product better and more quickly — a win-win for everyone.

At Little Miss Robot, we try to hold half of our meetings in our studio. Clients find it inspiring to be in this creative environment — especially because it is where their own product is being developed. In long-term projects, we also ask the client to designate a space at their office for our team to work on the project. When developing Radio+, we worked at the client’s headquarters twice a week. Anyone could hop in and out of the space and have informal conversations about the work. Not only did it create a great atmosphere, but we also received the most valuable feedback during these times. Highly recommended!

The Radio+ room, a shared workspace.

Seeing Things

A typical project starts with the team exploring or defining what they will create. A lot of teams rely on textual aids, such as functional requirements. While these documents contain a lot of detail, I always end up having to address misinterpretations. The worst part is that these “minor” misunderstandings always pop up during the production stage, resulting in increased time and expenses. Have you noticed on these occasions that the client says they “saw” things a bit differently? This is why I recommend using text documents to scope features and using visual resources to describe them. Mind maps, wireframes, storyboards and paper prototypes are my personal favorites.

The wireframe for the Radio+ mobile website.
The wireframe for the Radio+ mobile website. (Large version)

I always encourage clients to get involved in generating these visual resources. Having them by your side during a brainstorm or a UX workshop is really helpful. While they wouldn’t consider themselves designers, I’m always challenged and inspired by their thinking and how they see things.

Feeling The Progress

Throughout the mobile development process, you will probably invite the client to several meetings to discuss the status of the project and to demo the product. Make sure you have something tangible to talk about. If a meeting is just about process, time or budget, then let the project manager handle it. Build momentum when meeting in person, and show your work in progress on real devices! Of course, you could print out the design or demo the application on a big screen, but the client should be able to feel the progress in their hands, too. Feeling a product grow in your hands is a much more powerful and engaging experience!

Some great tools exist to share designs across devices. We use AppTaster to share mockups, Dropbox to share designs and TestFlight to distribute apps to clients. If we are building a mobile website, then we just host it on the client’s servers internally, which allows them to view the latest version whenever they want.

Over-the-air beta testing for TestFlight.
Over-the-air beta testing for TestFlight. (Large version)

Happy Ending

Involving clients in your mobile workflow is the key to better understanding their problems, goals and strategies. You’ll also raise more awareness of and appreciation for your work, thus reducing negative energy and making discussions more positive and constructive. However big or small your team or client, it all starts with a desire to be involved. These take-aways can help you with that:

  1. Create a manifesto that explains what your team stands for.
  2. Hold a kick-off meeting to ask the client about their vision, strategy and goals.
  3. Use both your and their offices to meet.
  4. Scope features in text documents, and describe them in visual documents.
  5. Take advantage of third-party tools to share your work in progress on real devices.

Last but not least, read Jeremy Girard’s article on how to wrap up a project and follow up afterwards. This is critical to building and maintaining a long-term relationship. Most importantly, it will lead to future business because the client will already know and value your work.

Please feel free to share your experiences and thoughts in the comments below. I’m already looking forward to reading them.

(al, ml)


© Thomas Joos for Smashing Magazine, 2014.

Continue reading

Read More

Interview With Khajag Apelian: “Type Design Is Not Only About Drawing Letters”


  

Having started his career studying under some of the best typographic minds in the world, Khajag Apelian not only is a talented type and graphic designer, unsurprisingly, but also counts Disney as a client, as well as a number of local and not-for-profit organizations throughout the Middle East.

Even more impressive is Khajag’s willingness to take on work that most people would find too challenging. Designing a quality typeface is a daunting task when it’s only in the Latin alphabet. Khajag goes deeper still, having designed a Latin-Armenian dual-script typeface in four weights, named “Arek”, as well as an Arabic adaptation of Typotheque’s Fedra Display.

Khajag ApelianGiven his experience in working between languages, it’s only logical that Khajag’s studio maajoun was chosen by the well-known and beloved Disney to adapt its logos for films such as Planes and Aladdin into Arabic, keeping the visual feel of the originals intact.

Q: Could you please start by telling us more about some of the typefaces you’ve designed?

Khajag: Well, I’ve only designed one retail font, and that is Arek. It started as my final-year project in the Type and Media program at KABK (Royal Academy of Art, the Hague). Arek was my first original typeface, and it was in Armenian, which is why it is very dear to me. I later developed a Latin counterpart in order to make it available through Rosetta, a multi-script type foundry.

Another font I designed is Nuqat, with René Knip and Jeroen van Erp. Nuqat was part of the “Typographic Matchmaking in the City” project, initiated by the Khatt Foundation between 2008 and 2010. In this project, five teams were commissioned to explore bilingual type for usage in public spaces.


Arek is a dual-script Latin-Armenian typeface family in four weights, with matching cursive styles. (Large preview)

I’ve also worked on developing the Arabic companion of Fedra Display by Typotheque. The font is not released yet but will be soon, hopefully in the coming year, so keep an eye on Typotheque if you’re interested.

Q: How did you start designing type?

Khajag: We had a foundational course in type design at Notre Dame University in Lebanon (NDU) during my bachelor’s degree. Actually, it was more like a project within a course, where we were asked to design an “experimental Arabic typeface” — something that was quite basic and that didn’t really involve type design, which I later realized when I entered the Type and Media program. So, that was the first project I worked on that could be considered close to designing type. The outcome is nothing to be proud of, but the process was a lot of fun.

Then, I started to work more and more with letters, although I never knew I could develop this interest, let alone study it later on. I only found out about the program at KABK during my final year at the university, when NDU graduate Pascal Zoghbi came to the school to present his Type and Media thesis project. That did it for me — two years later, I was there!


Typographic Matchmaking 2.0 parts 1-3. (Watch on YouTube)

Q: Tell us about the course at KABK. Did you focus only on designing Latin typefaces, or were you able to develop your skill in designing Arabic faces, too?

Khajag: The year at KABK was one of the best times I’ve had. It was intense, rich, fun and fast. It’s incredible how much you develop when surrounded by teachers who are considered to be the top of the typographic world and classmates who were selected from different places around the world, each bringing their own knowledge and experience to the table.

During the first semester, we tackled the basics of type design in calligraphy classes, practicing and exercising the principles of Latin type. We mostly learned the fundamentals of contrast, letter structure and spacing. This continued over the year through sketching exercises, designing type for different media and screens, and historical revivals.

Sketching exercises
A couple of type-drawing exercises on TypeCooker. (Image source)

Adapting these principles to the specifics of other scripts, like Arabic and Armenian, had to come from a more personal learning effort. But despite their modest knowledge of these scripts, the instructors are capable of guiding you through your final project. At the time, I decided to go with Armenian for my final project, but others have worked with other scripts, and the results have been strong and impressive.

Q: How do you keep the spirit of a typeface intact when moving from one language to another? Is it easier to maintain this feel when designing the Latin counterpart of an Armenian typeface, as you did with Arek, or when moving from Latin to Arabic, as you’re doing with Fedra Display?

Khajag: I think each project presents its own challenges to translating a certain spirit in different scripts. In the case of Arek, I started designing the Armenian without thinking about designing a Latin counterpart to it. So, my focus was entirely on one script. The process involved a lot of investigation of old Armenian manuscripts, from which my observations and findings were translated into the typeface. This naturally created a very strong spirit that I had to retain when I moved to designing the Latin counterpart.

Armenian and Latin letter proportions and constructions have certain similarities, which helped with the initial drawing of the Latin letters. I later had to reconsider some details, like the x-height, the serifs and the terminals, in order to achieve the ideal visual harmony.


The Arabic adaptation of Fedra Display. (Large preview)

In the case of Fedra Display Arabic, the spirit of the typeface was already there. The challenge was to translate the extreme weights of Fedra Sans Display to the existing Fedra Arabic. The Latin font is designed for headlines and optimized for a compact setting. These were important to retain when designing the Arabic counterpart. I experimented a lot with the weight distribution of the letterforms, something that is an established practice in the Latin script but not in the Arabic.

I had to find the right width and the maximum height of the letterforms in order to achieve similar blackness while maintaining the same optical size. Whereas, for the hairline, it was necessary to keep the compact feature of the Latin without undermining the Arabic script. A set of ligatures was designed to further enhance the narrowness of the font.

Q: Do you design only Arabic typefaces? If so, is there a particular reason for that?

Khajag: Besides Arek and a few other small projects I’ve been involved in, I mostly work with Arabic. The first direct reason is where I live, of course. Most of the time, clients in the region need to communicate in two languages, and that’s usually Arabic and English, or Arabic and French. The other reason is the number of Arabic fonts available compared to Latin fonts. Usually, when looking to communicate in English, I can find a way to do it through existing Latin fonts, but that’s not always the case with Arabic.

Although, I have to admit that a lot of good Arabic type is emerging in the design world nowadays. Still, there aren’t that many Arabic typefaces, and with time the good ones become overused and everyone’s designs start to look similar. This is why I look to differentiate my work through type. I do not always design complete functional typefaces; rather, I often develop “incomplete” fonts that I can use to write a word or a sentence for a poster or a book cover, and different lettering pieces here and there.


The identity poster and catalogue for the “Miniatures: A Month for Syria” event, organized by SHAMS. (Large preview)

Q: Do you prefer to design Arabic typefaces that hold true to the calligraphic origins of the script, or is it more interesting to depart from those origins somewhat, as you did with Nuqat?

Khajag: I think Nuqat is quite an extreme case of departing from calligraphy. I consider it an experiment rather than a functional typeface. In any case, I don’t think I have a particular preference for typefaces to design. I am very much intrigued by the process, and in both cases there are some quite interesting challenges to tackle. A big responsibility comes with designing a typeface that must remain true to its calligraphic origins, something that comes with a lot of history and that has reached a level of perfection. And when you depart from that, you go through an abstraction process that can also be a fun exercise.


Nuqat is a display typeface designed by Khajag Apelian and René Knip for the “Typographic Matchmaking in the City” project, initiated by the Khatt Foundation. (Large preview)

Q: Where does your love of typography and graphic design come from?

Khajag: Like most teenagers about to start university, I was confused about my subject of study. At the time, I was part of a dance troupe with a friend who used to be a graphic designer. I liked her quite a bit and thought I could enroll in graphic design to be “cool” like her! I didn’t know what graphic design was about at the time. And so I enrolled. The foundation year was all about color theory, shapes and composition. It wasn’t until the second year or so that I started to realize what design was really about. Luckily, I loved it.

Later on, I took courses with Yara Khoury, and thanks to her I really got to appreciate typography. Yara was heavily influenced by different European schools that put typography on a pedestal, and she managed to transfer that to me and to other students. At NDU, we were exposed to the work of various designers from the Bauhaus and Swiss schools, and we were trained to capture the details and understand the function of type within graphic design. I was particularly fascinated by how one can go all the way from designing something that goes unnoticed by the reader to something that is very present and expressive, all just with type.

Q: Did you enjoy the visual departure from the Arabic culture you were surrounded by and brought up in, into the Modernist European one you were learning about? Did you ever find the aesthetic difference between the two difficult to navigate?

Khajag: Very much, actually. It wasn’t difficult to navigate per se, but rather overwhelming, maybe? Everything in the Netherlands is designed, and many of those things are featured in books as exemplary design. I had always been exposed to this through books and the Internet, but actually being immersed in it was another experience. One funny incident was when I spotted a police car for the first time, knowing it was branded by Studio Dumbar. I was so excited, I almost wanted to take a picture with them.

Q: How did you start off in the design industry? Could you also describe your role at your current company?

Khajag: My first job as a designer was in branding with Landor Associates in Dubai. I worked there for around a year, before going to the Netherlands for my master’s. After graduation, I extended my visa for a year and worked freelance with several Dutch design studios on projects that involved designing with Arabic. My work partner, Lara, was also living and working in the Netherlands at the time, and both of our visas were about to expire. Right before coming back to Beirut, we worked together on a cultural project with Mediamatic. We got comfortable working together and thought, Why not start a studio when we get back to Beirut? And so we did.


Book cover design and guidelines for Hachette Antoine, a regional publishing house that maajoun has been working with for over three years (Large preview)

When we started, we were highly inspired by Dutch business models, such as Mediamatic and O.K. Parking, which often initiate their own cultural or educational projects and events, sometimes funded through their commercial practice. This business model was somewhat new to us at the time. Things have changed since then, and many design agencies nowadays have their own cultural or educational projects, sometimes referred to as “R&D” or “corporate social responsibility”. Far from being a corporate strategy, we like to think of our side projects as a channel to exchange knowledge with other designers in our area.

Our commercial practice, on the other hand, is focused on editorial design, lettering and type design. Our studio is rather small (most of the time, only the two of us), which means we both have to do a bit of everything, even accounting!

Q: What have been your biggest achievements till now?

Khajag: I consider maajoun to be one of my biggest achievements to date. I love what we do. When you work on fun projects in university (whether cultural or experimental), everyone tries to make you feel like you should enjoy it as much as you can because you won’t get to do much of it in the “real world.” That’s not true. At maajoun, we work on interesting projects, we take the time to experiment, and we have fun!

Publishing Arek with Rosetta would be another big achievement. Arek is the first typeface that I seriously developed, and I am really happy it is out there and available to the public.


Maajoun’s submission to GrAphorisms, a project initiated by SHS Publishing (Large preview)

Q: You’ve done some work on Arabic versions of logos for several Disney films. Are you able to share with us what that process has been like?

Khajag: Arabic logo adaptation is becoming more and more common in the Middle East and North Africa, whose markets big international brands are trying to reach. Disney is no exception. We were asked to design the Arabic versions of the logos for several Disney films, including Aladdin, The Lion King and Beauty and the Beast.

We usually start by analyzing the original logo, its visual characteristics and some distinctive shapes; most importantly, we try to extract some cultural references from the lettering technique used in the logo, whether it has a 1960s retro feel or some elegance in a classical serif. We then try to translate these both visually and conceptually to the Arabic. This helps us to create a logo that works well visually with its Latin counterpart, without compromising the essence of the Arabic script.


Maajoun’s adaptation of Disney logos to Arabic script (Large preview)


The Arabic adaptation for Disney’s Tangled (Large preview)

Q: Is there a way for readers to know what conferences you’ll be speaking at or attending, or workshops you’ll be organizing?

Khajag: Kristyan Sarkis, Lara and I decided a few months ago to start a series of Arabic lettering workshops, which we’ll try to carry to different cities every now and then. We started during Beirut’s Design Week in June 2013 and had another session in July. We are having another one around May in Beirut, so those who are interested can stay tuned to our Facebook page.

Also, the Khatt Foundation usually organizes a workshop on Arabic type design at Tashkeel in Dubai. I usually take part in this. It’s an intensive nine-day workshop. The first three days concentrate on Arabic calligraphy and lettering, while the next six days are on Arabic type design. I also usually announce these things through Twitter (@debakir and @maajoun) or through maajoun’s page on Facebook.

Q: What advice would you give to young readers out there who are interested in becoming a type designer?

Khajag: Go for it! But know that type design is not only about drawing letters. It involves research and a lot of technical work.

Related Resources

(il, al)


© Alexander Charchar for Smashing Magazine, 2014.

Continue reading

Read More


Frizz-Free JavaScript With ConditionerJS


  

Setting up JavaScript-based functionality to work across multiple devices can be tricky. When is the right time to load which script? Do your media query tests, your geolocation popup tests and your viewport orientation tests provide the best possible results for your website? ConditionerJS will help you combine all of this contextual information to pinpoint the right moment to load the functionality you need.

Before we jump into the ConditionerJS demo, let’s quickly take a look at the Web and how it’s changing, because it’s this change that drove the development of ConditionerJS in the first place. In the meantime, think of it as a shampoo but also as an orchestra conductor; instead of giving cues to musicians, ConditionerJS tells your JavaScript when to act up and when to tune down a bit.

Applying conditioner to guinea pigs results in very smooth fur.
As you can clearly see, applying conditioner to guinea pigs results in very smooth fur. Take a moment and imagine what this could mean for your codebase.

The Origin Of ConditionerJS

You know, obviously, that the way we access the Web has changed a lot in the last couple of years. We no longer rely solely on our desktop computers to navigate the Web. Rather, we use a wide and quickly growing array of devices to get our daily dose of information. With the device landscape going all fuzzy, the time of building fixed width desktop sites has definitely come to an end. The fixed canvas is breaking apart around us and needs to be replaced with something flexible — maybe even something organic.

What’s a Web Developer to Do?

Interestingly enough, most of the time our content already is flexible. It’s the styles, visuals and interaction patterns that are classically rigid, and they are what create challenging, even downright impossible, situations. It turns out that HTML (the content’s container) has always been perfectly suited to a broad device landscape; the way we present it is what’s causing us headaches.

We should be striving to present our content, cross-device, in the best possible way. But let’s be honest: this “best possible way” is not three width-based static views, one for each familiar device group. That’s just a knee-jerk reaction, an attempt to hang on to our old habits.

The device landscape is too broad and is changing too fast to be captured in groups. Right this moment, people are making phone calls holding tablets to their heads, while others are playing Grand Theft Auto on their phones until their fingers bleed. There are phones that are tablets and tablets that are phones; there’s no way to determine where a phone ends and a tablet starts, or which devices might fall in between, so let’s not even try.

Grouping devices is like grouping guinea pigs.
Grouping devices is like grouping guinea pigs; while it’s certainly possible, eventually you will run into trouble.

To determine the perfect presentation and interaction patterns for each device, we need more granularity than device groups can give us. We can achieve a sufficient level of detail by looking at contextual information and measuring how it changes over time.

Context On The Web

The Free Dictionary defines “context” as follows:

“The circumstances in which an event occurs; a setting.”

The user’s context contains information about the environment in which that user is interacting with your functionality. Unlike feature detection, context is not static. You could be rotating your device right now, which would change the context in which you’re reading this article.

Measuring context is not only about testing hardware features and changes (such as viewport size and connection speed). Context can (and is) also influenced by the user’s actions. For instance, by now you’ve scrolled down this article a bit and might have moved your mouse a few pixels. This tells us something about the way you are interacting with the page. Collecting and combining all of this information will create a detailed picture of the context in which you’re currently reading this content.

Correctly measuring and responding to changes in context will enable us to present the right content in the right way at the right moment.

Note: If you’re interested in a more detailed analysis of context, I advise you to read Designing With Context by Cennydd Bowles.

Where And How To Measure Changes In Context

Measuring changes in context can easily be done by adding various tests to your JavaScript modules. You could, for example, listen to the resize and scroll events on the window or, a bit more advanced, watch for media query changes.

Let’s set up a small Google Maps module together. Because the map will feature urban areas and contain a lot of information, it should render only on viewports wider than 700 pixels. On smaller screens, we’ll show a link to Google Maps. We’ll write a bit of code to measure the window’s width to determine whether the window is wide enough to activate the map; if not, then no map. Perfect! What’s for dinner?
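That naive width check, baked straight into the map module, might look something like the following sketch. The Google Maps wiring itself is left out, and `shouldRenderMap` is just an illustrative helper name; the 700-pixel figure comes from the scenario above:

```javascript
// Naive approach: the module measures the window itself.
// This is exactly the kind of coupling we'll want to avoid later.
function shouldRenderMap(viewportWidth, minWidth) {
  return viewportWidth >= minWidth;
}

if (typeof window !== 'undefined') {
  if (shouldRenderMap(window.innerWidth, 700)) {
    // initialize the interactive map here …
  } else {
    // render a plain link to Google Maps instead …
  }
}
```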

Don’t order that pizza just yet!

Your client has just called and would like to duplicate the map on another page. On this page, the map will show a less crowded area of the planet and so could still be rendered on viewports narrower than 700 pixels.

You could add another test to the map module, perhaps basing your measurement of width on some className. But what happens when a third condition is introduced, and a fourth? No pizza for you any time soon.

Clearly, measuring the available screen space is not this module’s main concern; the module should instead mostly be blowing the user’s mind with fantastic interaction patterns and dazzling maps.

This is where Conditioner comes into play. ConditionerJS will keep an eye on context-related parameters (such as window width) so that you can keep all of those measurements out of your modules. Specify the environment-related conditions for your module, and Conditioner will load your module once these conditions have been met. This separation of concerns will make your modules more flexible, reusable and maintainable — all favorable characteristics of code.

Setting Up A Conditioner Module

We’ll start with an HTML snippet to illustrate how a typical module would be loaded using Conditioner. Next, we’ll look at what’s happening under the hood.


<a href="http://maps.google.com/?ll=51.741,3.822"
   data-module="ui/Map"
   data-conditions="media:{(min-width:30em)} and element:{seen}"> … </a>

Codepen Example #1

We’re binding our map module using data attributes instead of classes, which makes it easier to spot where each module will be loaded. Also, binding functionality becomes a breeze. In the previous example, the map would load only if the media query (min-width:30em) is matched and the anchor tag has been seen by the user. Fantastic! How does this black magic work? Time to pop open the hood.

See the Pen ConditionerJS – Binding and Loading a Map Module by Rik Schennink (@rikschennink) on CodePen.

A Rundown of Conditioner’s Inner Workings

The following is a rundown of what happens when the DOM has finished loading. Don’t worry — it ain’t rocket surgery.

  1. Conditioner first queries the DOM for nodes that have the data-module attribute. A simple querySelectorAll does the trick.
  2. For each match, it tests whether the conditions set in the data-conditions attribute have been met. In the case of our map, it will test whether the media query has been matched and whether the element has scrolled into view (i.e. is seen by the user). Actually, this part could be considered rocket surgery.
  3. If the conditions are met, then Conditioner will fetch the referenced module using RequireJS; that would be the ui/Map module. We use RequireJS because writing our own module loader would be madness — I’ve tried.
  4. Once the module has loaded, Conditioner initializes the module at the given location in the DOM. Depending on the type of module, Conditioner will call the constructor or a predefined load method.
  5. Presto! Your module takes it from there and starts up its map routine thingies.
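Steps 1 and 2 above can be caricatured in a few lines. This is not Conditioner’s actual source: plain objects stand in for DOM nodes, and `testConditions` stands in for the real expression evaluator.

```javascript
// Toy sketch of steps 1 and 2: collect nodes that declare a module
// and keep only those whose conditions pass.
function collectModules(nodes, testConditions) {
  return nodes
    .filter(function (node) { return !!node.module; })
    .filter(function (node) {
      // nodes without conditions are always loaded
      return !node.conditions || testConditions(node.conditions);
    })
    .map(function (node) { return node.module; });
}

// Plain objects standing in for DOM nodes:
var modules = collectModules(
  [
    { module: 'ui/Map', conditions: 'media:{(min-width:30em)}' },
    { module: 'ui/Clock' },
    { module: null }
  ],
  function () { return true; } // pretend every condition is met
);
// modules → ['ui/Map', 'ui/Clock']
```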

After the initial page setup has been done, Conditioner does not stop measuring conditions. If they don’t match at first but are matched later on (perhaps the user decides to resize the window), Conditioner will still load the module. Also, if conditions suddenly become unsuitable, the module will automatically be unloaded. This dynamic loading and unloading of modules will turn your static Web page into a living, growing, adaptable organism.

Available Tests And How To Use These In Expressions

Conditioner comes with a basic set of tests, each of which is a module in itself.

  • “media” query and supported
  • “element” min-width, max-width and seen
  • “window” min-width and max-width
  • “pointer” available

You could also write your own tests to do all sorts of interesting stuff. For example, you could use this cookie consent test to load certain functionality only if the user has allowed you to write cookies. Or what about unloading hefty modules when the battery falls below a certain level? Both are possible. You can combine all of these tests in Conditioner’s expression language; you’ve already seen this in the map example, where we combined the seen test with the media test.
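The battery idea can be probed with the Battery Status API, which not every browser supports. How you would wrap the result in a Conditioner test module depends on the library’s test API, so only the raw check is sketched here; `checkBattery` is a hypothetical helper name:

```javascript
// Check battery level with the Battery Status API.
// Feature-detect first: the API is not available everywhere.
function checkBattery(minLevel, onResult) {
  if (typeof navigator === 'undefined' || !navigator.getBattery) {
    onResult(true); // no API available: assume it is fine to load
    return;
  }
  navigator.getBattery().then(function (battery) {
    // battery.level is a number between 0 and 1
    onResult(battery.level >= minLevel);
  });
}
```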


media:{(min-width:30em)} and element:{seen}

Combine parentheses with the logical operators and, or and not to quickly create complex but still human-readable conditions.
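For example, the following binding (a hypothetical ui/Gallery module) would load either when a wide viewport’s element has been seen or whenever a pointer is available:

```html
<div data-module="ui/Gallery"
     data-conditions="(media:{(min-width:48em)} and element:{seen}) or pointer:{available}"> … </div>
```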

Passing Configuration Options To Your Modules

To make your modules more flexible and suitable for different projects, allow for the configuration of specific parts of your modules — think of an API key for your Google Maps service, or stuff like button labels and URLs.

Configuring guinea pig facial expression using configuration objects.
Configuring guinea pig facial expression using configuration objects.

Conditioner gives you two ways to pass configuration options to your modules: page- and node-level options. On initialization of your module, it will automatically merge these two option levels and pass the resulting configuration to your module.

Setting Default Module Options

Defining a base options property on your module and setting a default options object is a good start, as in the following example. That way, some sort of default configuration is always available.


// Map module (based on AMD module pattern)
define(function(){

    // constructor
    // element: the node that the module is attached to
    // options: the merged options object
    var exports = function Map(element,options) {

    }

    // default options
    exports.options = {
        zoom:5,
        key:null
    }

    return exports;
});

By default, the map is set to zoom level 5 and has no API key. An API key is not something you’d want as a default setting because it’s kinda personal.

Defining Page-Wide Module Options

Page-level options override options for all modules on a page, which is handy for something like locale-related settings. You could define page-level options using the setOptions method that is available on the conditioner object, or you could pass them directly to the init method.


// Set default module options
conditioner.setOptions({
    modules:{
        'ui/Map':{
            options:{
                zoom:10,
                key:'012345ABCDEF'
            }
        }
    }
});

// Initialize Conditioner
conditioner.init();

In this case, we’ve set a default API key and increased the default zoom level to 10 for all maps on the page.

Overriding Options for a Particular Node

To alter options for one particular node on the page, use node-level options.


<a href="http://maps.google.com/?ll=51.741,3.822"
   data-module="ui/Map"
   data-options='{"zoom":15}'> … </a>

Codepen Example #2

For this single map, the zoom level will end up as 15. The API key will remain 012345ABCDEF because that’s what we set it to in the page-level options.

See the Pen ConditionerJS – Loading the Map Module and Passing Options by Rik Schennink (@rikschennink) on CodePen.

Note that the options are in JSON string format; therefore, the double quotes on the data-options attribute have been replaced by single quotes. Of course, you could also use double quotes and escape the double quotes in the JSON string.
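The precedence of the three option levels can be illustrated with a small sketch. This is an assumption about Conditioner’s internals, shown only to make the merge order explicit; it is not the library’s actual code:

```javascript
// Sketch of the option merge: later levels overwrite earlier ones.
function mergeOptions(defaults, pageLevel, nodeLevel) {
  var result = {};
  [defaults, pageLevel, nodeLevel].forEach(function (level) {
    for (var key in level) {
      if (Object.prototype.hasOwnProperty.call(level, key)) {
        result[key] = level[key];
      }
    }
  });
  return result;
}

var merged = mergeOptions(
  { zoom: 5, key: null },            // module defaults
  { zoom: 10, key: '012345ABCDEF' }, // page-level options
  { zoom: 15 }                       // node-level options
);
// merged.zoom === 15, merged.key === '012345ABCDEF'
```

Node-level options win over page-level options, which in turn win over the module defaults.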

Optimizing Your Build To Maximize Performance

As we discussed earlier, Conditioner relies on RequireJS to load modules. With your modules carefully divided into various JavaScript files, one file per module, Conditioner can now load each of your modules separately. This means that your modules will be sent over the line and parsed only once they’re required to be shown to the user.

To maximize performance (and minimize HTTP requests), merge core modules together into one package using the RequireJS Optimizer. The resulting minimized core package can then be dynamically enhanced with modules based on the state of the user’s active context.
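A minimal r.js build configuration for such a core package could look like the following sketch. The paths and module names here are assumptions; adjust them to your own project layout:

```javascript
// build.js — run with: node r.js -o build.js
({
  baseUrl: 'js',
  name: 'lib/conditioner',                 // the loader itself goes into the core
  include: ['ui/Navigation', 'ui/Toggle'], // modules needed on every page
  out: 'js/core.min.js'
  // context-specific modules such as ui/Map are deliberately left out;
  // Conditioner will load them on demand
})
```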

Carefully balance what is contained in the core package and what’s loaded dynamically. Most of the time, you won’t want to include the more exotic modules or the very context-specific modules in your core package.

Try to keep your request count to a minimum — your users are known to be impatient.
Try to keep your request count to a minimum — your users are known to be impatient.

Keep in mind that the more modules you activate on page load, the greater the impact on the CPU and the longer the page will take to appear. On the other hand, loading scripts conditionally will increase the CPU load needed to measure context and will add additional requests; also, this could affect page-redrawing cycles later on. There’s no silver bullet here; you’ll have to determine the best approach for each website.

The Future Of Conditioner

A lot more functionality is contained in the library than we’ve discussed so far. Going into detail would require more in-depth code samples, but because the API is still changing, the samples would not stay up to date for long. Therefore, the focus of this article has been on the concept of the framework and its basic implementation.

I’m looking for people to critically comment on the concept, to test Conditioner’s performance and, of course, to contribute, so that we can build something together that will take the Web further!

(al, ml, il)


© Rik Schennink for Smashing Magazine, 2014.

Continue reading

Read More
Page 1 of 2412345»1020...Last »
