Automagically load your Gulp plugins

When I first started using Gulp, I felt that the most annoying thing about it was that I had to manually require all my plugins. On a large project that meant 20 lines of requiring plugins. Soon I was looking for a solution that would let me include plugins automatically, and thankfully I found one that's extremely easy to use. It's called gulp-load-plugins.

Using gulp-load-plugins

In order to use gulp-load-plugins you must first install it through npm. Open up a terminal window and type npm install --save-dev gulp-load-plugins. If it fails due to permission errors you might have to run the command with sudo. When the plugin is installed you can use it inside your gulpfile like this:

var gulp = require('gulp');
var plugins = require('gulp-load-plugins')();

gulp.task('scripts', function() {
    gulp.src("/scripts/src/*.js")
        .pipe(plugins.plumber())
        .pipe(plugins.concat('app.js'))
        .pipe(plugins.uglify())
        .pipe(gulp.dest("/scripts/"))
});

On line 2 gulp-load-plugins is being included. Note that we immediately call this module as well by adding parentheses after requiring it. When you call the plugin it looks through your node_modules and adds every module that starts with gulp- to itself as a callable function. So if you have gulp-uglify installed you can just call plugins.uglify.

When you have a gulp plugin with dashes in its name, like gulp-minify-css, the loader will add it as minifyCss. In other words, it camelcases the plugin names. Well, that's it. This gulp plugin really helped my gulp workflow and I hope it will help you as well.
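To make the camelcasing concrete, here's a minimal sketch. It assumes gulp-minify-css is installed as a devDependency; the paths are just placeholders.

var gulp = require('gulp');
var plugins = require('gulp-load-plugins')();

gulp.task('styles', function() {
    // gulp-minify-css is exposed as plugins.minifyCss
    gulp.src('styles/src/*.css')
        .pipe(plugins.minifyCss())
        .pipe(gulp.dest('styles/'));
});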

Stop writing vendor prefixes, autoprefixer does that for you

Anybody who writes css for the modern web has probably touched vendor prefixes at some point. These prefixes are required to get the most out of browsers that support bleeding-edge properties in ways that aren't yet part of the css3 spec. When you're writing these vendor prefixes it's easy to forget one or two, or to add one that isn't actually required for the browsers you're supporting. Of course you can always check caniuse.com to see which prefixes you need for your use case, but that's not something you want to do on a daily basis.

And even if you knew all the prefixes, you probably wouldn't know the syntax for each property. Take flexbox, for example. Flexbox has had several implementations across browsers, and writing a full stack of vendor prefixes for it looks a little bit like this:

.flexbox {
    display: -webkit-box;
    display: -webkit-flex; /* two webkit versions!?!?!? */
    display: -ms-flexbox;
    display: flex; 
}
 
.flexbox .flexitem {
    -webkit-flex-shrink: 0;
    -ms-flex-negative: 0;
    flex-shrink: 0; 
}

Using autoprefixer you can just write this, and it will output the prefixes above for you:

.flexbox {
    display: flex;
}
 
.flexbox .flexitem {
    flex-shrink: 0;
}

Much better, right?

How does it work?

Autoprefixer uses caniuse.com data to figure out which prefixes it should use. When you use autoprefixer you can define which browsers you want to support. You can define the browsers in a very natural way; for example, you could pass it > 5%, which tells it to only support browsers that have more than 5% market share.

It then scans your entire css for properties that require prefixes and it automatically adds them to your css. Simple enough, right?

Setting this up for yourself

I will use Gulp to set up autoprefixer. If you’re unfamiliar with Gulp, I’ve written a post on getting started with it. I recommend you read that before you continue. If you’ve used Gulp before, grab your gulpfile and open a terminal window.

First of all we should install the gulp-autoprefixer plugin. Do this by typing npm install --save-dev gulp-autoprefixer. If you get permission errors you might have to run this command with sudo (sudo npm install --save-dev gulp-autoprefixer). Now that we have that set up we can add gulp-autoprefixer to our css task like this:

var gulp = require('gulp');
var sass = require('gulp-sass');
var autoprefixer = require('gulp-autoprefixer');
 
gulp.task('css', function(){
    gulp.src('src/sass/**/*.scss')
        .pipe(sass())
        // the important lines
        .pipe(autoprefixer({
            browsers: '> 5%'
        })) 
        .pipe(gulp.dest('contents/css/'));
});

If you already use gulp, only the autoprefixer pipe will be new to you. What's going on here is just a regular css compile task. We take all our .scss files and stream them to the sass plugin so they get compiled. After that we stream the compiled css to autoprefixer, which adds all the vendor prefixes we need. Then the output gets saved and you're done; you can write prefix-free css now!

A note about the above code

After publishing this I received a note from Andrey Sitnik, the author of Autoprefixer, saying that gulp-postcss should be used over gulp-autoprefixer. Below is an example of using autoprefixer with gulp-postcss. To use this you should first install both packages: npm install --save-dev gulp-postcss autoprefixer-core .

Example code:

var gulp = require('gulp');
var sass = require('gulp-sass');
var autoprefixer = require('autoprefixer-core');
var postcss = require('gulp-postcss');
 
gulp.task('css', function(){
    gulp.src('src/sass/**/*.scss')
        .pipe(sass())
        // the important lines
        .pipe(postcss([
            autoprefixer({
                browsers: '> 5%'
            })
        ])) 
        .pipe(gulp.dest('contents/css/'));
});

The nice thing about using postcss is that you can combine multiple postprocessors in a single pass, as sketched below. That makes it more powerful than just using gulp-autoprefixer.
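For example, you could add a minifier to the same postcss call. This is only a sketch; cssnano isn't part of the setup above, so you'd have to install it first (npm install --save-dev cssnano):

var gulp = require('gulp');
var sass = require('gulp-sass');
var autoprefixer = require('autoprefixer-core');
var cssnano = require('cssnano');
var postcss = require('gulp-postcss');

gulp.task('css', function(){
    gulp.src('src/sass/**/*.scss')
        .pipe(sass())
        // one postcss pass, two postprocessors
        .pipe(postcss([
            autoprefixer({
                browsers: '> 5%'
            }),
            cssnano()
        ]))
        .pipe(gulp.dest('contents/css/'));
});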

How about Bourbon?

Bourbon is a sass mixin library with many features, one of which is prefixing css. It does this by providing mixins that prefix the css you pass to them. An example:

a {
    @include transition(color, 200ms);
 
    &:hover {
        color: blue;
    }
}

When you compile your sass this will output properly prefixed css. The big plus is that you have one less plugin in your gulpfile, because sass (with some help from Bourbon) is now prefixing your css. However, you still have to remember which properties use prefixes. That in itself isn't bad, it's good to know what you're working with, but it's also easy to forget to use the mixin and end up with missing prefixes.

Wrapping up

In this post I've briefly explained why you shouldn't be writing your own vendor prefixes. It's easy to forget some and they're high maintenance. I also explained that a gulp plugin called gulp-autoprefixer can make your life a lot easier by automagically prefixing css rules that require a vendor prefix.

I also showed you, really quickly, that you can use Bourbon to achieve effectively the same thing, but it's a little more high maintenance than using autoprefixer.

The main point is, you don't have to write prefixes yourself. There are two very nice tools out there and each has its own pros and cons. I personally prefer autoprefixer because it uses caniuse.com data to figure out what it should prefix, so I don't have to worry about remembering which properties need prefixing.

You should start using Browsersync today.

Seriously, you should. Browsersync is a great tool that allows you to sync your browser on multiple screens. This might not sound that impressive, but in reality it is. It's so impressive that I felt like I needed to make a .gif for you because you otherwise might not get how awesome Browsersync is.

Browsersync demo gif

The above image shows four different browsers, all of them are acting in sync. I am only interacting with the browser on the top right, the other three are automatically updating through Browsersync.

Why should you use it?

Personally, I held off trying Browsersync for quite some time. But when I installed it last Friday I was absolutely amazed; it was so easy to set up and it's such a great tool for testing in multiple browsers. Even mobile browsers play nice with Browsersync. Integrating it with gulp was extremely simple as well, so I can refresh all connected browsers whenever my css or javascript gets compiled. Need more than reloading and syncing? Well, there's more to Browsersync than those two features.

Remote debugging

When you're working on a Linux machine, like I am, and somebody walks up to you with their iPhone and shows you a bug, it can be a real pain in the arse. Debugging Safari on an iPhone without having a Mac nearby is damn near impossible. With Browsersync it's as easy as connecting the iPhone to Browsersync and you can remotely inspect the DOM, the console and more. Isn't that awesome? If that's not cool I don't know what is.

Setting this up for yourself

Let's set up Browsersync for whatever project you're working on right now, shall we? First install Browsersync through npm:

npm install browser-sync --save-dev

Make sure you have gulp installed as well, if you've never used gulp before then you might want to check out my post on getting started with gulp.

In your gulpfile make sure to include Browsersync.

var browserSync = require('browser-sync');

Let's say we want to start Browsersync whenever we start our gulp watch task. And then whenever our css changes we want to refresh the browser. Doing this is actually incredibly simple. Here's the code:

gulp.task('watch', function(){
    browserSync({
        proxy: 'http://localhost:5000'
    });

    gulp.watch({glob: 'scss/**/*.scss'}, ['css', browserSync.reload]);
});

The above code creates a watch task. The first thing that task does is start Browsersync, and the only option I'm passing to it is a proxy. In this example I assume you have a development setup where you're already running your app server the way you're used to. The project I tried Browsersync with is a Python app, so using a proxy is the best fit for my own use case. You can also opt to run Browsersync on a folder, as sketched below; it's up to you really. Check out the docs for more options.
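If you don't have an app server to proxy, Browsersync can serve a folder for you instead. A minimal sketch, assuming your html lives in the project root (the baseDir value is just a placeholder):

browserSync({
    server: {
        baseDir: './'
    }
});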

When the css changes we start the css task. The next task we pass is the browserSync.reload function. When this function is called all connected browsers will reload. And that's all, you're using Browsersync.

When you start your watch task by typing gulp watch , Browsersync will tell you the url for your project in the console. It also tells you a UI url. This url is where you'll find some great features and debugging options. You should try to explore them a little, they're really nice.

Moving forward

Now that you have Browsersync set up you can start exploring its great features. The documentation for Browsersync is really good and goes in depth on all the options you can pass to it. The debugger is really powerful and deserves a look once you start checking everything out. Hopefully you'll enjoy using Browsersync and the way it can speed up your development workflow.

If you think I've missed anything, if you have questions or feedback, you can find me on Twitter.

Why I love to write stories while I’m programming

When you program something that involves complex logic it's very easy to jump in head first and start typing away. The first couple of projects I did started like this; they weren't very carefully planned out, which meant that requirements would change all the time. This often resulted in buggy code and forgotten features.

When I started working with other programmers who were young and inexperienced like I was, I noticed that many of them struggled with the same problems. We would encounter weird bugs all the time and we'd curse those silly users for wanting strange things. The things users want are, in fact, seldom strange. Users might not be able to articulate what they want, but more often than not their wishes are genuine and real.

Starting to define projects

When I learned more about programming, I learned that planning and defining requirements is extremely important. So whenever I started working on something I'd start thinking about it. I thought real hard for several minutes before I started to model my data. And then I thought some more and remodeled my data. And I'd think about features, and then I would build them. And I felt great, everything seemed so organized because I actually thought it through.

But then those pesky users came along. Still finding bugs. Still missing features. And I would have to go back to the drawing board, remodel data, and break everything I had built just to allow a user to edit something they did before. At this point I had realized that most features users complained about were actually good features. For example, editing something. Users make mistakes, so they should be able to correct them. It seemed so logical to me: the system asks you "Are you sure?" and the user responds with a "Yes". The user was sure, so why would they need a way to undo or edit it? Well, we all make mistakes and a system should be kind to its users.

Write it down

In order not to forget features I would write down lists at the beginning of a project. Often these lists would be written down really quickly, and I'd use pen and paper for this. The lists would contain features and data flows, and they would be written from the perspective of the application. This helped me a lot but it still wasn't perfect. I would still forget about users in a way that was very annoying for myself. And the people that used the things I built were probably a bit annoyed as well.

So, when I got some bug reports in I would quickly write them down, redefine what I was going to do and off I went, fixing bugs and making the world a nicer place. But you can probably already guess. I'd still forget things. I sometimes shared my lists with others to make sure I didn't forget anything. Usually I didn't seem to have forgotten anything until we started some real world testing. So, what went wrong here? I had lists, people gave feedback, all should be good right?

The user story

It wasn't until I started working on a very large, internal project at Rocket Science Studios that I learned how to properly define features. One of the senior developers wanted to approach the big project with user stories. These stories would outline how a user interacts with the application. The stories were never very long and they would never take into account how the system works. So no mention of AJAX or MySQL, just user related actions. A user story we wrote would look like: "When I add a new item to the canvas I want it to be placed in the center automatically". Or "If I move something around I want it to be saved automatically". Or, one more example "When I lose my internet connection I don't want to lose my changes if the browser window closes".

When you have user stories you start working with something real. You start thinking like a user and you start thinking about how a user actually interacts with your application. And when you are in that mindset you probably become more critical of your own work. When a user wants something to happen they probably want it to happen smoothly. They probably don't care about technical details, they just want to do things.

In conclusion

In the years that I have been learning how to build web applications I have tried several approaches to building features. I've learned that doing things quickly is seldom a good idea. An even worse idea is to keep all the information in your head. You're probably way too busy to actually remember everything, sometimes over the course of several days. Another thing I've learned is that users don't care about details. A user wants to do something and they want it to be smooth; it's our job to make that happen.

A tool I enjoy using is the user story. It's a simple, effective tool that forces you to think from a user perspective rather than an application perspective. That forces you to carefully plan and think about features in a more user-centered way, which often results in things that are smoother, more logical and less buggy. So, my advice to anyone reading this is to try user stories sometime. Maybe together with your whole team, maybe just for your own features. Chances are that you will write better applications.

How to prevent Gulp from crashing all the time

When you're working with Sass and use Gulp to compile your .scss files, it can happen that you introduce an error by accident. You misspell one of your mixins or variables and gulp-sass will throw an error. When this happens your Gulp task crashes and you have to restart it. When I started using Gulp this drove me crazy and I really wanted a solution for this problem.

Gulp-plumber

The solution I found is called gulp-plumber. This nifty plugin catches stream errors from other plugins and allows you to handle them with a custom handler. You can also let gulp-plumber handle the errors, it outputs them to your console by default. So when a plugin encounters an error you will be notified of the error and you can easily fix it without having to restart your Gulp task.

Example code

var gulp = require('gulp');
var plumber = require('gulp-plumber');
var sass = require('gulp-sass');

gulp.task('scss', function(){
    gulp.src('/**/*.scss')
        .pipe(plumber())
        .pipe(sass())
        .pipe(gulp.dest('/css/'));
});

The above example takes all .scss files as input and then 'activates' gulp-plumber. After doing this you can use gulp like you're used to. Whenever an error occurs after activating plumber, it will be caught and handled so gulp doesn't crash on you. If you'd rather respond to errors with your own logic, gulp-plumber also accepts a custom handler; there's a sketch of that below. That's it, hopefully this is helpful to you. If you have questions about this quick tutorial you can find me on Twitter.
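Here's what such a custom handler could look like. This is only a sketch: the errorHandler option is part of gulp-plumber, but the log message and the decision to end the stream are just one way to handle things.

var gulp = require('gulp');
var plumber = require('gulp-plumber');
var sass = require('gulp-sass');

gulp.task('scss', function(){
    gulp.src('/**/*.scss')
        .pipe(plumber({
            errorHandler: function(error) {
                // log the error and end this run of the stream,
                // so a watch task keeps running
                console.log('Error in scss task: ' + error.message);
                this.emit('end');
            }
        }))
        .pipe(sass())
        .pipe(gulp.dest('/css/'));
});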

How I improved my workflow with Imagemagick

When working with assets you will often want to change some of them. My personal experience is that I often want to resize images, stitch them together, blur them or convert them from .png to .jpg. When I had to do this I usually sighed, fired up Photoshop, created a batch and then ran everything in my folder through that batch. When I realized I got it wrong I would have to do this again and Photoshop usually crashed at least once in the process as well. Needless to say, I did not enjoy adjusting my assets. Until I didn't have Photoshop anymore...

Changing workstations

In December I switched from an OS X oriented workplace to an Ubuntu oriented workplace, which meant that I didn't get to have Photoshop on my machine (without jumping through hoops). This wasn't a problem because I didn't have to work with assets as much as I used to, and when I did I would just ask somebody else to do the tedious tasks for me.

But then I remembered Imagemagick. In the past I've used their PHP library to resize images on my webserver, but under the hood that actually uses the Imagemagick command line tool. And since I was working with a Linux machine I figured I could install Imagemagick and use it. And so I did. And it's beautiful.

Start using Imagemagick

Before you can use Imagemagick you have to install it, go to their install page and find the appropriate version for your machine. Surprise, it turns out they have an OS X version as well. After you've installed Imagemagick you should prepare some images to resize.

Resizing images

Resizing images is surprisingly simple. Open up a command line, navigate to the folder where you have your image and type this into your command line.

convert myimage.jpg -resize 50% myimage_half_size.jpg

This will scale your image down to 50% of its original size. If you have several images to process you might run something like:

for "x" in *.jpg; do convert $x -resize 50% half_size_$x; done;

Pretty simple, right? You can also scale images proportionally to a pixel value like this:

convert myimage.jpg -resize 250x myimage_250.jpg

This will make the image 250px wide and scale the height proportionally. Scaling an image based on height is similar but place the pixel value after the x:

convert myimage.jpg -resize x250 myimage_250h.jpg

Manipulating image quality

After resizing your images you might want to optimize their quality a bit as well. I usually get extremely high quality assets and they can be as large as 2-3MB sometimes. After resizing they might already be down to around 100KB, but giving up a little quality can take them down to 60-70KB. Here's the command to do that:

mogrify -quality 60 myimage_lq.jpg

Instead of convert I used mogrify for this snippet. Mogrify is used for in-place manipulation of images, so it overwrites your image. If you don't want that you can substitute it with convert and use it just like you did earlier.

Wrapping it up

With the commands you've learned in this post it should be easy enough to optimize and manipulate your images in the command line. For more examples check out the usage section on the Imagemagick website. I personally feel like using Imagemagick has improved my workflow and I hope it can (and will) do the same for you.

Getting started with Gulp

Let's talk about tools first

Lately I've noticed that many Front-end developers get caught up in tools. Should they use Less, or Sass? Do they code in Sublime or Atom? Or should they pick up Coda 2? What frameworks should they learn? Is Angular worth their time? Or maybe they should go all out on Ember or Backbone.

And then any developer will reach that point where they want to actually compile their css, concatenate their javascript and minify them all. So then there is the question, what do I use for that? Well, there's many options for that as well! Some prefer Grunt, others pick Gulp and some others turn to Codekit for this. Personally I got started with Codekit and swapped that out for Gulp later on. The reason I chose Gulp over Grunt? I liked the syntax better and people said it was exponentially faster than Grunt.

What is Gulp

After dropping all these tools in the previous paragraphs you might be confused. This article will focus on getting you started with Gulp. But what is Gulp exactly?

Gulp is a task runner; its purpose is to perform repetitive tasks for you. When you are developing a website, some things you might want to automate are:

  • Compile your css
  • Concatenate javascript files into a single file
  • Refresh your browser
  • Minify your css and javascript files

This would be a very basic setup. Some more advanced tasks include:

  • Running jshint
  • Generating an icon font
  • Using a cachebuster
  • Generating spritesheets

And many, many more. The way Gulp does all these things is through plugins. This makes Gulp a very small and lightweight utility that can be expanded with plugins so it does whatever you want. At this time there's about 1200 plugins available for gulp.

When should I learn Gulp

You should learn Gulp whenever you feel like you are ready for it. When you're starting out as a front-end developer you will have to learn so many things at once I don't think there's much use in immediately jumping in to use Sass, Gulp and all the other great tools that are available. If you feel like you want to learn Sass (or Less) as soon as you start learning css you don't have to learn Gulp as well. You could manually compile your css files as you go. Or you could use a graphical tool like Codekit for the time being. Like I said, there's no need to rush in and confuse yourself.

What about Grunt?

For those who paid attention, I mentioned Grunt at the start of this article. I don't use Grunt for a very simple reason: I don't like the syntax. And also, Gulp is supposed to be faster, so that's nice. I feel it's important to mention this because if you choose to use a tool like Gulp or Grunt you have to feel comfortable. A tool should help you get your job done better; using a tool shouldn't be a goal in itself.

That said, Grunt fulfills the same purpose as Gulp, which is running automated tasks. The way they do it is completely different though. Grunt uses a configuration file for its tasks and then it runs all of the tasks in sequence. So it might take a Sass file and turn it into compiled css. After that it takes a css file and minifies it. Gulp uses javascript and streams. This means that you take a Sass file and Gulp turns it into a so-called stream. This stream then gets passed along and plugins modify it. You could compare this to an assembly line: you have some input, it gets modified n times, and at the end you get the output. In Gulp's case the output is a compiled and minified css file. Now, let's dive in and create our first gulpfile.js!

Creating a gulpfile

The gulpfile is your main file for running and using gulp. Let's create your first Gulp task!

Installing gulp

I'm going to assume you've already installed node and npm on your machine. If not go to the node.js website and follow the instructions over there. When you're done you can come back here to follow along.

To use gulp you need to install it on your machine globally and locally. Let's do the global install first:

npm install -g gulp

If this doesn't work at once you might have to use the sudo prefix to execute this command.

When this is done, you have successfully installed Gulp on your machine! Now let's do something with it.

Setting up a project

In this example we are going to use gulp for compiling sass to css. Create the following folders/files:

my_first_gulp/
    sass/
        style.scss
    css/
    gulpfile.js

In your style.scss file you could write the following contents:

$background-color: #b4da55;

body {
    background-color: $background-color;
}

Alright, we have our setup complete. Let's set up Gulp for our project.

Setting up Gulp

Let's start by locally installing gulp for our project:

npm install gulp

If you're more familiar with npm and the package.json file that you can add to your project you'll want to run the command with --save-dev appended to it. That will add gulp as a dependency for your project. This also applies to any plugins you install.

Next up, we install the sass compiler:

npm install gulp-sass

We're all set to actually build our gulpfile now. Add the following code in there:

var gulp = require('gulp');
var sass = require('gulp-sass');

gulp.task('css', function(){
    gulp.src('sass/style.scss')
        .pipe(sass())
        .pipe(gulp.dest('css/'));
});

gulp.task('watch', function(){
    gulp.watch('sass/**/*.scss', ['css']);
});

In this file we import gulp and the gulp-sass module. Then we define a task called css. Inside of this task we take our style.scss file and pipe it through to the sass plugin. What happens here is that the sass plugin modifies the stream that was created when we loaded sass/style.scss. When the sass plugin is done it sends the stream on to the next pipe call. This time we call gulp.dest to save the output of our stream to the css folder.

Think of this process as an assembly line for your files. You put the raw file in on one end, apply some plugins to it in order to build your final file and then at the end you save your output to a file.

The second task we define is called watch. This is the task we're going to start from the command line in a bit. What happens in this task is that we tell gulp to watch everything we have in our sass folder, as long as it ends in .scss. When something changes, Gulp will start the css task.
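If you'd also like plain gulp (without a task name) to kick off the watcher, you can register a default task. In the Gulp 3.x style this gulpfile uses, that's one extra line:

gulp.task('default', ['watch']);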

Go back to your command line and type:

gulp watch

Now go to your style.scss file, change something and save it (or just save it). You should see something happening in the command line now.

[20:54:17] Starting 'css'...
[20:54:17] Finished 'css' after 3.63 ms

That's the output I get. If you check the css folder, there should be a css file in there. You just generated that file! That's all there is to it.

Moving forward

Now that you've learned to compile sass files into css files with gulp you can start doing more advanced things. Gulp's syntax shouldn't make it too hard to expand on this example. Usually you just add an extra pipe to your stream, call the plugin you want, repeat that a few times and eventually save the output file.
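As an illustration, here's what the css task from earlier might look like with one extra pipe added. This is just a sketch; it assumes you've also installed gulp-minify-css (npm install gulp-minify-css):

var gulp = require('gulp');
var sass = require('gulp-sass');
var minifyCss = require('gulp-minify-css');

gulp.task('css', function(){
    gulp.src('sass/style.scss')
        .pipe(sass())
        // the extra step: minify the compiled css
        .pipe(minifyCss())
        .pipe(gulp.dest('css/'));
});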

I hope this guide to getting started with gulp is helpful to you. If you have notes for me, questions or just want to chat you can send me a tweet @donnywals

Weekly Swift 3, Interfaces and CoreData

This is the third post I'm writing about my swift adventure and it's been great so far. I feel like I've been able to learn a lot about Swift and UIKit. I did miss two days because I was extremely busy those days, so that's a bit of a shame.

In week three I focused on learning UI stuff rather than building Arto App. I decided to do this because a better understanding of UIKit might be very important in the process of developing it. I also took a peek at CoreData, which was interesting.

UIScrollView

UIScrollView 1

I used the UIScrollView this week for making a zoomable/scrollable image and a slideshow. This was useful because I got to learn that a tableview actually subclasses the scrollview. It's also pretty easy to get started with a scrollview and it seems like an awesome way for creating good user experiences. It just feels good to use, especially because it's so easy to implement.

UIScrollView 2

Sidescrolling menu

Many apps today use a navigation pattern where there's a menu hidden underneath the content of a page. Using a great tutorial from raywenderlich.com, I was able to create one of these menus and again I was amazed at how flexible and natural a lot of programming in UIKit feels. I'm only scratching the surface so there's a lot more to be explored and probably there's many things that aren't as easy and straightforward as what I've done so far.

Slide out menu

CoreData

If you need a database to store objects in, then CoreData is what you need. This week I have used CoreData to save a list of names and birth dates. CoreData uses a data model that you can query. You work with the models through an NSManagedObjectContext, which is obtained through the AppDelegate; that seems strange, but it's just how it works.

Actually using CoreData for something as simple as this was not too hard and quite easy to understand. In the CoreData example I did, I found that handling user input is actually very strange. When a user selects a text input field a keyboard pops up, which makes sense. But when the user taps outside of the text input, essentially taking the focus away from it, you'd expect the keyboard to disappear. It doesn't, because you have to manually tell the keyboard to stop responding.

No more daily projects

Starting this week I won't finish one thing a day anymore. I do intend to push something to Github every day, but it's my goal to make more useful things that are hard to squeeze into a single day. So my intent is to make the daily Swift something I do every other day. That way I can try to make more interesting projects that are a bit more complex than the ones I'm doing right now.

This week's repositories

As always, if you want to follow my progress you can do so on Twitter and Github.

Understanding HTML5 srcset

Ever since responsive design became a thing, people have worried about the sizing of their images. Why would you serve an image that's 1800px wide to a device that is only 320px wide? That's a very sensible question to ask when you're dealing with responsive design. If you consider that this 320px wide device might very well be a mobile device on a 3g connection, the question makes even more sense.

Working towards a solution

I think it was a few years back when I saw people debating the problem of responsive images. How would the syntax look? How will it be compatible with current browsers? HTML5 isn't supposed to break backward compatibility. I've seen a proposal for a picture element come by. That looked promising. But eventually something else came along, and it's called srcset.

How it works

The srcset is an attribute that takes a list of images that a browser can choose from. By using the srcset you are trusting the browser to make smart decisions about what image is shown. If you want to have a crop out on small screens and then a different version on larger screens you might be in for trouble. Browsers are entirely free in their interpretations of the srcset attribute. They (should) try to be sensible, but Chrome 40 caches your images, so making the viewport smaller won't trigger a reload on the image. Why? Well, why would it. The browser assumes that a higher resolution version is 'better' and it won't waste an extra URLRequest on fetching your lesser quality image.

An example of using srcset:

<img src="http://placehold.it/100/100"
     srcset="
         http://placehold.it/320/320 320w,
         http://placehold.it/640/320 640w,
         http://placehold.it/960/320 960w">

The list we define uses this syntax:

<image_src> <image_width(in pixels)>

We could also define a pixel density:

<image_src> <pixel_density(2x, 3x etc.)>

But I want to have some control

Understandable. While this feature is somewhat of a black box, it isn't all black magic that goes on, as Chris Coyier illustrates in his blog post You're just changing resolutions. The bottom line of what he illustrates is that a browser uses simple math to determine the best image for the current case. What the browser does is divide the image width by the viewport width and then check how that matches the current pixel density.

Example from Chris' blog:

320/320 = 1
640/320 = 2
960/320 = 3

Imagine using a browser that has a 320px viewport. The images we are offering are 320, 640 and 960 wide. The browser does the math and decides that the 320w image is the best one since we are using a 1x pixel density device.

Now imagine using a retina device in this same case. The browser will now use the 640w image. Why? Well, a retina device has a 2x pixel density. So the 320 image is too small. It needs a bigger image.

More control please!

The browser always uses the full browser width to do the math now. But what if we needed to display images at 33% of the viewport on large screens, 50% on medium screens and 100% on small screens?

This is a question I really struggled with for a bit because here's where things get magical. But luckily we got a great mental model to work with because of Chris Coyier's examples. So, let's use the same html snippet we had before but we add the rules I just specified by using the sizes attribute.

<img src="http://placehold.it/100/100"
     srcset="
         http://placehold.it/320/320 320w,
         http://placehold.it/640/320 640w,
         http://placehold.it/960/320 960w"
     sizes="
         (min-width: 768px) 50vw 100vw,
         (min-width: 1200px) 33vw">

I've used some arbitrary values for the definition of what is small, medium and large. The browser goes through the list and uses the first media condition that matches, which is why the largest breakpoint comes first: on viewports of at least 1200px the image is displayed at 33vw, from 768px up it's 50vw, and the bare 100vw at the end is the fallback for anything smaller.

Okay, so now let's do the math again:

(320*1)/320 = 1         //viewport is smaller than 768px
(640*1)/320 = 2         //viewport is smaller than 768px
(960*0.5)/320 = 1.5     //viewport is smaller than 1200px

With this math we can see that viewports at 320px and 640px are actually pretty predictable. A viewport with 960px is a strange case. The browser will either pick the 320w image or the 640w image. I would expect the 640w image because 1.5 can be easily rounded to 2 but it's up to the browser to implement this.

Using the srcset

I feel like it's pretty safe to go ahead and use srcset right now. The fallback is good since it's just a regular img tag with a src attribute set on it. However, do make sure that you sort of understand how a browser might interpret your srcset. It could save you many headaches and it will make you feel more confident about using it. If you want to know more about srcset I strongly recommend reading You're just changing resolutions since it's a good description of how srcset works.

Questions, comments, notes, feedback, it's all welcome. You can find me on twitter for that.

Weekly Swift 2, getting somewhere

Time flies, it’s been two weeks since I started my daily Swift adventure and I only missed one day. Pretty impressive I’d say. The second week of the daily Swift was a very practical one. It was all about looking through Arto App and building the components I’d need to actually build this app. Building it will be a part of the daily Swift.

Layout and Constraints

A big part of creating a beautiful feed of content seems to be understanding the UITableView. More specifically, understanding UITableViewCells. These Cells are the core of what a user will see and interact with and giants like Twitter, Facebook and Instagram all implement some custom version of a UITableViewCell.

Making my own cells appeared very hard at first. I didn't understand what would go where, or how I would make a cell auto-resize itself to fit the content I put into it. After digging around on the web I found a video tutorial and a blog post that explained in part how to do this. Then I started experimenting with LayoutConstraints. These Constraints are extremely powerful and surprisingly simple to write. I've been getting warnings every time I used them, but everything looks and works fine so I'm not sure how serious these warnings are. An example:

"V:|-10-[myObject]-10-|"

The text above defines a Constraint in Visual Format Language (VFL) and it says that on the vertical axis (V) there should be a 10 point margin (-10-), then myObject appears, and then there's another 10 point margin (-10-). The height of myObject in this case will be the container's height minus two times a 10 point margin.

Delegation

In order to load data from the Arto API I wanted to have an object in my application that would manage this. I decided to use delegation as a callback style for this object. The implementation of this is actually pretty simple. In the same file as the one where I defined my API class I wrote a protocol. A protocol is defined by naming it and specifying methods that should be implemented by the implementer of this protocol.

Then, whenever the object finished loading some data I call:

self.delegate?.didReceiveData(data)

The fact that there’s a question mark after self.delegate  tells the compiler that there might not be a delegate. If the delegate isn’t there, everything after the question mark is ignored.

So all I needed to do to use this, is make an ArtoAPI instance in my TableViewController, implement the ArtoAPIDelegate protocol and use the data that gets passed to didReceiveData . Easy enough, right?

Bonus: A header image effect

There’s some apps that have this very nice expanding header effect when you pull down the content below it. I have recreated this effect and it actually was surprisingly easy.

Expanding header example

Putting it together

At the end of the week I’ve spent some time putting together the parts. Combining the UITableView, UITableViewCell, ArtoAPI, loading images with Haneke and more. A picture says more than a thousand words so here’s a gif that shows the end result.

Arto Feed Example

In conclusion

The second week of the daily Swift was all about working towards creating an iOS version of Arto App. I'm still trying to decide whether week three should keep pushing forward or if week three should be about learning more small things like an off-screen menu, blurring images, using shadows, implementing POST requests to push data to a server and more of that. If you can't wait to find out, make sure to follow me on Twitter or check out my Github regularly. Thanks for taking the time to read this and if you have any tips, comments or feedback be sure to let me know.

This week's repositories