I recently migrated the We Heart It Android app project to Android Studio and I thought I’d migrate the Android-ImageManager library into the new format as well. That was the easy part. Another thing I wanted to do was to upload it to Maven Central, so it would make people’s lives easier when using it.

Ok this was painful. Here are the steps I took in order to achieve it, so hopefully people won’t struggle that much doing it in the future:

  1. Follow Chris Banes’ steps;
  2. Follow the steps here to configure the upload username and password;
  3. Make sure you edit gradle.properties and build.gradle with your library’s information (version, description, links, etc);
  4. Follow this guide. To summarize, you’ll need to create a Sonatype JIRA account and then open an issue there to request your project. That is essentially applying for open source project hosting (steps 2 and 3 in the user guide mentioned above);
  5. Wait for your issue to be “Resolved”, that may take up to 2 business days (!!!). Someone from Sonatype will do it and you’ll get an email;
  6. Run ./gradlew clean build uploadArchives to build your project and upload it to Sonatype. IMPORTANT: You have to upload a snapshot build first, otherwise it is not going to work. You do that by adding -SNAPSHOT at the end of your VERSION_NAME in gradle.properties, e.g.: VERSION_NAME=1.0.0-SNAPSHOT (see the gradle.properties sketch right after this list);
  7. After that, if all went well, you should see your project here. You can navigate through the folders. My project’s group id was com.felipecsl.android, so I could find it at http://oss.sonatype.org/content/repositories/snapshots/com/felipecsl/android/
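
For reference, here is roughly what my gradle.properties looked like while publishing the snapshot. The property names come from Chris Banes’ script and the values below are illustrative, so adjust them to your own library:

VERSION_NAME=1.0.0-SNAPSHOT
VERSION_CODE=1
GROUP=com.felipecsl.android

POM_NAME=Android-ImageManager
POM_ARTIFACT_ID=library
POM_PACKAGING=aar
POM_DESCRIPTION=An image downloading and caching library for Android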

I still haven’t figured out how to promote a Staging Repository. Right now it is failing to close the repository because the package is not signed. I will update this post once I have a solution. In the meantime, you can already use your snapshot with Maven/Gradle. Here is how your build.gradle would look:

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:0.6.+'
    }
}
apply plugin: 'android'

repositories {
    maven { url "https://oss.sonatype.org/content/repositories/snapshots/" }
    mavenCentral()
}

dependencies {
    compile 'com.felipecsl.android:library:1.0.0-SNAPSHOT'
}

Well, that is pretty much it for now. It took me almost an entire afternoon to figure this all out; hope it is useful for other people. If you’re interested in a simpler and a lot more elegant solution for open source library hosting, check rubygems.org, which is used to, guess what, host Ruby gems. That is really a piece of cake compared to all this Java mess :sigh:

This week I’m attending AnDevCon in San Francisco, and in the first tutorial of the first day of the event I attended a really nice workshop by Chiu-Ki Chan titled Hands-on Android Custom View Workshop. The code is here on Github.

We built a simple custom view using circles, arcs and onDraw to draw some kind of pie chart that reacts to clicks, sends change events, etc.

I implemented the FractionView with some small changes, turning it into some kind of clock/stopwatch that animates by itself. Below is a GIF with the sample app running.
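
By the way, the gist of a self-animating custom view fits in a few lines. This is neither Chiu-Ki’s code nor exactly my fork, just a minimal sketch of the onDraw + postInvalidateDelayed pattern we used:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.RectF;
import android.util.AttributeSet;
import android.view.View;

public class ClockView extends View {
  private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
  private final RectF bounds = new RectF();
  private float sweepAngle = 0;

  public ClockView(Context context, AttributeSet attrs) {
    super(context, attrs);
    paint.setColor(Color.BLUE);
  }

  @Override
  protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    bounds.set(0, 0, getWidth(), getHeight());
    // Draw the elapsed slice of the pie, starting at 12 o'clock
    canvas.drawArc(bounds, -90, sweepAngle, true, paint);
    // Advance the hand and schedule the next frame one second from now
    sweepAngle = (sweepAngle + 6) % 360;
    postInvalidateDelayed(1000);
  }
}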

Please feel free to play with it and check out my fork on Github.

Working with animated GIFs on Android can be a painful task. In the We Heart It app we deal with a lot of images, but also animated GIFs. WebView would be by far the simplest solution; however, since we support Android back to API Level 8 (2.2), it is not an option for us, as it doesn’t work on most older Android versions.

Looking at this excellent tutorial by Johannes Borchardt, I decided to take the GifDecoder approach, which seems to be the most reliable solution. However, the GifDecoder class suggested by Johannes is not very memory efficient, since it keeps the bitmap data for every frame of the GIF in memory.

By doing some more research, I found this great gist with an optimized implementation of GifDecoder that, as described there, “decodes images on-the-fly, and only the minimum data to create the next frame in the sequence is kept”.

Its interface is, however, not exactly the same as the one exposed by the original GifDecoder class, so the GifDecoderView class had to be adjusted. I made the changes required to interact with this version and also to start/stop the animation. By default it will loop the GIF animation even if the view is not on the screen, so you have to be careful to call startAnimation() and stopAnimation() correctly to avoid GIFs playing in the background, which can eat up all your memory very quickly.

My interaction looks basically like this:

  // In my fragment class...

  @Override
  public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    gifView = new GifDecoderView(getActivity());
    gifView.setBytes(bitmapData); // bitmapData holds the raw GIF bytes
    gifView.startAnimation();
  }

  @Override
  public void setUserVisibleHint(boolean isVisibleToUser) {
    super.setUserVisibleHint(isVisibleToUser);

    // Stop decoding frames as soon as the fragment goes off screen
    if (!isVisibleToUser) {
      gifView.stopAnimation();
    }
  }

You can get all the code here in this Gist.

I started Android development a few months ago with the upcoming We Heart It app. Here are some random thoughts and impressions I had during this period, and the motivation behind building the Android-ImageManager library:

  • API Complexity: Compared to the iOS SDK, at first, the Android SDK seemed overly complicated and over-engineered to me. There are specific patterns you need to follow for most common scenarios, and avoiding them just ends up making things harder, so you basically have no choice. For example, the AccountManager API, which you need to use if you want to create a built-in Android user account: there are several interfaces you have to implement, services to create, methods to override, etc. It is very painful overall (see the authenticator skeleton after this list).

  • Version Compatibility: If you want to support a decent range of devices and Android versions, you’ll spend countless hours testing and have to buy 10 different phones and tablets, since the emulator is simply unusable (terribly slow). Stick with proven open source libraries for a less painful development experience, like ActionBarSherlock, ViewPagerIndicator, HoloEverywhere, DiskLruCache, etc. You can thank Jake Wharton for most of that awesome work.

  • Automated Testing: Painful as well. We decided to go with Robolectric, which basically stubs all the Android APIs for you. That has pros and cons, but it is definitely good that your tests run super fast (in seconds) and you don’t have to keep stubbing everything to make your tests work. All you need is a simple JUnit project. I haven’t looked much into integrated UI tests yet, but we definitely want to check that out soon. Android provides some classes for this job.

  • Memory Management: This deserves an entire article of its own. If you start working with images, galleries, etc., or just do something that is not plain trivial, you’ll sooner or later start seeing OutOfMemoryError. It is hard to get right, and there are many things you need to be aware of when building an application, especially for We Heart It, which is totally image-heavy. Some of the techniques definitely include:

    1. Cache your images so you don’t keep requesting the same images over and over again, wasting the user’s data bandwidth;
    2. Request a sampled version of the images, adequate for the display dimensions where they are going to be used (see the sampling sketch after this list);
    3. Recycle your bitmaps when they are not being used anymore, so you can reclaim unused memory faster;
    4. Avoid creating too many instances: when working with a REST API, it is easy to instantiate new objects all the time, keep them around, send them to Activities, etc. Try to keep your footprint to a minimum. Think about every object instantiation and whether it is needed or not. Sometimes all you need is a simpler version of your domain object to be shown in the UI, so keep a slimmer version of your model that contains only the information to be displayed.
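
To give an idea of the AccountManager boilerplate mentioned in the first point above, here is a rough skeleton of what you end up writing: an authenticator plus the Service that exposes it. The class names are hypothetical and the method bodies are stubbed out; this is a sketch of the required surface area, not our actual implementation:

import android.accounts.AbstractAccountAuthenticator;
import android.accounts.Account;
import android.accounts.AccountAuthenticatorResponse;
import android.accounts.NetworkErrorException;
import android.app.Service;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.os.IBinder;

public class MyAuthenticator extends AbstractAccountAuthenticator {
  public MyAuthenticator(Context context) {
    super(context);
  }

  // Called when the user adds an account of your type; typically returns
  // an Intent (wrapped in a Bundle) that launches your login Activity
  @Override
  public Bundle addAccount(AccountAuthenticatorResponse response, String accountType,
      String authTokenType, String[] requiredFeatures, Bundle options)
      throws NetworkErrorException {
    return null;
  }

  // Returns a cached auth token or re-authenticates against your API
  @Override
  public Bundle getAuthToken(AccountAuthenticatorResponse response, Account account,
      String authTokenType, Bundle options) throws NetworkErrorException {
    return null;
  }

  // ...and five more overrides you must provide even if they do nothing:
  @Override
  public Bundle confirmCredentials(AccountAuthenticatorResponse response, Account account,
      Bundle options) throws NetworkErrorException {
    return null;
  }

  @Override
  public Bundle editProperties(AccountAuthenticatorResponse response, String accountType) {
    return null;
  }

  @Override
  public String getAuthTokenLabel(String authTokenType) {
    return null;
  }

  @Override
  public Bundle hasFeatures(AccountAuthenticatorResponse response, Account account,
      String[] features) throws NetworkErrorException {
    return null;
  }

  @Override
  public Bundle updateCredentials(AccountAuthenticatorResponse response, Account account,
      String authTokenType, Bundle options) throws NetworkErrorException {
    return null;
  }
}

// The Service the system binds to, declared in the manifest together
// with an authenticator XML resource
public class MyAuthenticatorService extends Service {
  @Override
  public IBinder onBind(Intent intent) {
    return new MyAuthenticator(this).getIBinder();
  }
}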
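
And for item 2 of the list above, the standard trick is BitmapFactory’s inSampleSize: decode just the image bounds first, then decode the real bitmap at a fraction of its size. A minimal sketch (the class and method names are mine, not from any library):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class BitmapSampler {
  // Decodes a bitmap from raw bytes at roughly the requested dimensions,
  // never loading the full-size pixel data into memory
  public static Bitmap decodeSampled(byte[] data, int reqWidth, int reqHeight) {
    // First pass: read only the image dimensions
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeByteArray(data, 0, data.length, options);

    // Find the largest power-of-two sample size that keeps both
    // dimensions at or above the requested size
    int sampleSize = 1;
    while (options.outWidth / (sampleSize * 2) >= reqWidth
        && options.outHeight / (sampleSize * 2) >= reqHeight) {
      sampleSize *= 2;
    }

    // Second pass: decode the downsampled bitmap for real
    options.inJustDecodeBounds = false;
    options.inSampleSize = sampleSize;
    return BitmapFactory.decodeByteArray(data, 0, data.length, options);
  }
}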

Some of the reasons above motivated me to create the Android-ImageManager library, which takes care of caching, storing and displaying images efficiently. It uses a two-level cache: first in memory, using an LruCache, then on disk, using DiskLruCache to save the files to the device’s SD card. If an image is found in the memory cache, it is fetched from there; otherwise, it tries to retrieve it from the disk cache. If it can’t be found there either, it finally downloads the image and stores it in both caches. Everything is done asynchronously, obviously. The library is still at an early stage, and I plan to add more samples and polish it a bit more. So check out the Github repository.
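
The lookup flow boils down to something like this (illustrative names and types, not the library’s actual API):

import android.graphics.Bitmap;

public class TwoLevelImageCache {
  interface Cache {
    Bitmap get(String url);
    void put(String url, Bitmap bitmap);
  }

  interface Downloader {
    Bitmap download(String url);
  }

  private final Cache memoryCache; // backed by LruCache
  private final Cache diskCache;   // backed by DiskLruCache
  private final Downloader downloader;

  TwoLevelImageCache(Cache memoryCache, Cache diskCache, Downloader downloader) {
    this.memoryCache = memoryCache;
    this.diskCache = diskCache;
    this.downloader = downloader;
  }

  public Bitmap get(String url) {
    Bitmap bitmap = memoryCache.get(url); // 1. in-memory LruCache
    if (bitmap != null) return bitmap;

    bitmap = diskCache.get(url);          // 2. DiskLruCache on the SD card
    if (bitmap != null) {
      memoryCache.put(url, bitmap);       // promote it to the memory cache
      return bitmap;
    }

    bitmap = downloader.download(url);    // 3. finally, hit the network
    memoryCache.put(url, bitmap);         // and store it in both caches
    diskCache.put(url, bitmap);
    return bitmap;
  }
}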

Think these two strings are the same?

"R. Padre Chagas 342"
"R. Padre Chagas 342"

Pretty much, right? So we open IRB:

1.9.3p0 :085 > "R. Padre Chagas 342" == "R. Padre Chagas 342"
  # => false

WTF!!? It took me a couple of minutes of head scratching to figure it out… Let’s look closer:

1.9.3p0 :086 > "R. Padre Chagas 342".bytes.to_a
  # => [82, 46, 32, 80, ... , 67, 104, 97, 103, 97, 115, 32, 51, 52, 50] 
1.9.3p0 :087 > "R. Padre Chagas 342".bytes.to_a
  # => [82, 46, 32, 80, ... , 67, 104, 97, 103, 97, 115, 194, 160, 51, 52, 50]

Haa, there it is: an extra hidden byte! Crazy, huh? Even closer:

1.9.3p0 :088 > "R. Padre Chagas 342".byteslice(9..15)
 # => "Chagas " 
1.9.3p0 :089 > "R. Padre Chagas 342".byteslice(9..15)
 # => "Chagas\xC2"
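
Those two extra bytes, 194 and 160 (0xC2 0xA0), are the UTF-8 encoding of U+00A0: a non-breaking space, which renders exactly like a regular space. Once you know that, the cleanup is a one-liner, sketched here:

# Normalize non-breaking spaces to plain spaces before comparing
scraped = "R. Padre Chagas\u00A0342"
scraped.gsub("\u00A0", " ") == "R. Padre Chagas 342"
  # => true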

This is what happens when you scrape data from the web… Do not believe everything you see :)

If you have never heard of it, Chef is an automation tool for server management tasks. It is often associated with the terms DevOps and Infrastructure as Code, which have started to gain quite a bit of attention lately. If you’ve ever found yourself doing the exact same steps several times in a row, like installing MySQL on one server after another, and wondered if there was a more intelligent way of doing that, then Chef is for you.

Chef is not the only option available; other well-known tools include Puppet and Vagrant. The latter, though, is targeted at virtualized development environments, while the former, like Chef, is targeted at real (production) machines.

So, ‘nuff said, here are some notes I took from my first time playing with Chef. It was far from a straightforward process, which is why a good old blog post comes in handy. It turned out to be pretty painful just to get it started, and I didn’t even get to the part where you prepare the recipes and cookbooks (hmmm :)).

So, to get started, I fired up a brand new, clean Ubuntu 12 instance on Amazon EC2. If you need help doing that, there is plenty of documentation online on how to get started with AWS. Go there and search, I will wait here.

Ok, so now that you have an instance up and running, connect to it via ssh (tip: you will need the private key file .pem):

ssh -i <path_to_your_pem_file> ubuntu@ec2-xx-xx-xxx-xxx.us-west-1.compute.amazonaws.com

Install chef-server. There is a wiki for that here. After you run apt-get install chef chef-server, most likely chef-server will fail to start at the end (at least that is what happened to me).

Ok, if you try to start the server manually via the command chef-server, then it might end up exploding with this error:

NOTE: Gem.activate is deprecated, use Specification#activate. 
It will be removed on or after 2011-10-01.

This basically means that you are using the wrong version of Ruby and/or RubyGems. A plain Ubuntu install only comes with Ruby 1.8, which is probably not what we want. So we’ll install RVM and Ruby 1.9.

But, before that, let’s install some dependencies:

Run this to install the required packages… some of them are probably not needed for this, but it won’t hurt to install them anyway:

sudo apt-get install build-essential bison openssl libreadline6 libreadline6-dev curl \
git-core zlib1g zlib1g-dev libssl-dev libyaml-dev libsqlite3-0 libsqlite3-dev \
sqlite3 libxml2-dev libxslt-dev autoconf libc6-dev -y

Once that is done, install RVM, Ruby and some required gems:

curl -L https://get.rvm.io | bash -s stable --ruby
gem install yajl-ruby -v 0.7.7
gem install chef
gem install chef-server

At this point, you should be able to just run chef-server and see it running. If all goes well, you might want to run chef-server -e production -d to use the production environment and daemonize it, so it runs in the background. You will also need to start the webui via chef-server-webui -e production -d. If all goes smoothly, you should be able to access the webui at http://ec2-xx-xx-xxx-xxx.us-west-1.compute.amazonaws.com:4040/. As you probably noticed, the webui runs on port 4040, while chef-server runs on 4000. Don’t forget to open these ports in the AWS Security Group and allow your machine’s IP to access them! But wait, this was only the start hahah! Now comes the server configuration…

If you are still following the wiki steps (you should be), by now you should be running knife configure -i. Knife is Chef’s command line configuration tool; it can do a lot of stuff. This command will try to create an API user. If you are unlucky, like me, you will see this error:

Creating initial API user...
ERROR: Server returned error for http://ec2-xx-xx-xxx-xxx.us-west-1.compute.amazonaws.com:4000/clients/client_name, retrying 1/5 in 3s
...

This probably means Chef cannot connect to RabbitMQ (the component that does the messaging between Chef’s pieces). To double check, run this:

sudo rabbitmqctl list_permissions -p /chef

… and you should see this error:

Listing permissions in vhost "/chef" ...
Error: {no_such_vhost,<<"/chef">>}

So this is how you fix it:

sudo rabbitmqctl add_vhost /chef
sudo rabbitmqctl add_user chef <password>
sudo rabbitmqctl set_permissions -p /chef chef ".*" ".*" ".*"

Gotcha: the RabbitMQ password has to be the same as the one found in the /etc/chef/server.rb config file (mine was ‘root’).
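
If memory serves, the setting to look for in server.rb is amqp_pass; I am writing this name from memory, so double-check it against your own file. The line looks something like this:

# /etc/chef/server.rb
amqp_pass 'root'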

Ok, this should get you all set. Last step: verify the knife configuration by running knife client list. And guess what?! More errors!

ERROR: Your private key could not be loaded from /home/ubuntu/.chef/ubuntu.pem
Check your configuration file and ensure that your private key is readable

This happened to me because, when I ran knife configure -i, I picked a new username that was somehow already in use (ubuntu). The solution was to run it again and choose a different username when asked. This should create your API client and get you all set.

ubuntu@ip-xx-xxx-xxx-xxx:~/.chef$ knife client list
 Lixaredo.local
 admin
 chef-validator
 chef-webui
 felipecsl
 local

Ok, here is another gotcha. If you ever turn off your instance and bring it back later, it will most likely change its public DNS (*.compute.amazonaws.com…). Make sure you update it in your knife.rb file or it won’t work. Mine was located at ~/.chef/knife.rb:

log_level :info
log_location STDOUT
node_name 'local'
client_key '/home/ubuntu/.chef/local.pem'
validation_client_name 'chef-validator'
validation_key '/etc/chef/validation.pem'
chef_server_url 'http://ec2-xx-xx-xx-xx.us-west-1.compute.amazonaws.com:4000/'
cache_type 'BasicFile'
cache_options( :path => '/home/ubuntu/.chef/checksums' )

Ok that’s it for today. Next time I will talk about recipes and cookbooks!

I don’t usually post short tips, but this one is a real killer. To delete one word at a time when pressing ⌘+Delete in iTerm2, use the configuration shown in the first line of the screenshot below.

I have been working on a Ruby gem called Wombat for a while now, and I thought it would be worth talking about it here. I just released version 0.4.0 today, with some bug fixes and the addition of another helper method, as you can see in the GitHub readme.

Wombat is a gem that makes extracting information from web pages, or just crawling, a little less painful. If you were to do that today, you’d probably have to fall back to Nokogiri or another lower-level gem and process the data yourself. With Wombat, you define the structure of your data in a DSL-like format, and Wombat does the work for you. It is pretty handy, actually.
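
To give a taste of the DSL, a minimal crawl looks roughly like this. I am sketching this from memory of the readme, and the API may differ between versions, so check the GitHub page for the exact current syntax:

require 'wombat'

# Scrape the main heading from a page into a hash
data = Wombat.crawl do
  base_url "http://www.github.com"
  path "/"

  headline "xpath=//h1"
end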

It all started with the Noite Hoje project, where we used to grab show and party information from third-party sites and display it there, like an indexer. While working on that, I noticed how painful and repetitive crawling web pages can sometimes be.

Anyway, if Wombat is or was useful to you, please don’t hesitate to drop me a line in the comments; I would be very happy to know.

After meeting great people like Paul Irish and Yehuda Katz, you kinda feel almost obligated to share some knowledge with your peers and give back to the open source community.

By the way, if you work with Resque, you should probably also check the great perform_later gem by my friend and coworker Avi Tzurel, to which I also have been contributing.

Cheers!

A lot of people I meet seem very curious, and I am usually asked about how the hell I left my job at ThoughtWorks Brazil and moved to San Francisco to work for Gogobot. Overall, the transition was pretty simple. Getting the H1B visa was a lot simpler and faster than I expected: it took around 2 months from the date Gogobot entered the petition with the US government to the day I got my passport back with the visa stamped on it. However, I was a bit lucky, since there is a yearly quota of 60 thousand H1B visas that the US distributes, starting every year in April. I applied around October, so the quota was almost over, but I was lucky enough to make it. Another friend who applied a few weeks later wasn’t so lucky and will have to wait until April, when the window opens again.

Another thing that people usually ask me is how I got to know the company and how I got the job. From my personal experience, and from what I have heard from other fellow developers, companies are not very receptive to the idea of sponsoring an H1B visa for a foreign developer, probably due to the high cost and risk involved. The usual flow is: you get in touch with the company you are interested in, probably send an email with a nice cover letter and your resume attached, and get no response at all. The only advice I can give is: if you really want it, be persistent and keep trying. Write a nice cover letter with a brief introduction of who you are and why you want to work for that company. Try to be creative and original, otherwise you will be treated just like yet another average foreign candidate. It is also extremely important to show your achievements: having one or more pet projects is a big plus, as is actively contributing to open source, etc.

It is definitely not an easy task, nor an easy decision, to leave everything behind and move to another country. Anyway, I still believe it is extremely valuable in terms of career growth and life experience. If you can, I would advise you to do it! The sooner the better. As you get older, buy a house, have kids, get married, etc., it becomes harder and harder to do something like this. For the same reasons, I support entrepreneurship.

I’ve been here for almost one month so far, and right now, I can say I am very happy to have taken this decision, despite having to leave friends, family and my beloved fiancee behind. We will try to soften the hurt by visiting each other periodically.

Gogobot

All in all, Gogobot is a great place to be, where I have the opportunity to work with very talented and nice people, and also to experience the uniqueness of working for a Silicon Valley tech startup with millions of users and great media exposure!

Since January I have been working as a consultant at ThoughtWorks, in the “new building” at TecnoPUC. I can say with full conviction that I am very happy to have made this decision, and so I figured it would be nice to share a little of what our day-to-day looks like there.

The first impression upon entering the office is that something there is different from traditional IT companies. There are no cubicles, just big tables with no dividers; people working in pairs, that is, two per computer; NERF darts all over the floor; video games in the lounge; and people talking loudly.

ThoughtWorks is known worldwide as one of the pioneers of agile methodologies. This gives it a culture of its own, one that encourages collaboration and the delivery of high added value. The company has a rather bold mission: “Revolutionize IT”. To that end, it has a model of three pillars that describe its main purposes: Sustainable Business, Software Excellence and Social Justice. The first two are completely expected of an IT company; social justice, however, is a value that few companies hold as dear as ThoughtWorks does. It means we try to take on only clients that align with our profile, avoiding business with those that go against these values.

What caught my attention the most, however, was the autonomy given to each employee. First of all, there is no defined hierarchy and no concept of a “boss”. Each person plays their role and is held to what is expected of them. If you are not satisfied with something, you have complete freedom to make a suggestion or talk to your colleagues to try to identify problems and propose solutions. That is a rather unique trait, in my opinion, and it was the one that took me the longest to fully absorb. For these reasons, it is not rare to hear people say that you need to “deconstruct” and then “reconstruct” several concepts when you join the company, almost like brainwashing. :)

ThoughtWorks’ hiring process is quite thorough, demanding and extensive. The company sets out to be a place for the brightest minds in what they do, which is why people say that being a ThoughtWorker is not for everyone. Many characteristics of the candidate are tested, from values and personality to logical reasoning and programming skills. There are several qualifying and eliminatory stages, and the whole process usually takes no less than a month per candidate, who talks to around 10 people in total. Hard as it is, it is not impossible; after all, I made it. :)

TW is also a global company by nature. With offices all over the world, rotation between countries is encouraged, so it is very common to have ThoughtWorkers traveling between offices and working at client sites. At any given time of the year, we have at least 10% expats (TWers who came from other countries) in the TW Brazil office, where the official language is English. Leonardo Borges, who works at TW Australia and wrote about his experiences there, can vouch for me! No office has more than roughly 150 employees; when one reaches that limit, it usually stops growing and another location is sought for a new one. The reason is very simple: past a certain number of people in an office, it becomes very hard to know everyone and to maintain close relationships. This limit exists to preserve the company’s identity and the interaction between people.

With this post, I hope to have given you a sense of our day-to-day at TW and to have shown how it really is a different company. Anyone who has worked at a startup or a very small company (fewer than about 20 employees, say) will feel right at home there because, despite having more than 1600 employees worldwide, TW seems to keep the good traits of tech startups, where everything is flexible and work is a pleasure. :)

If you liked TW, are passionate about what you do and would like to work with us, do get in touch. After all, we are hiring. :D