The idea is very simple: instead of using a regular LTE connection, your device (iPhones and most high-end Android phones) connects to the wifi you already have at home and opens an encrypted tunnel to your mobile carrier’s infrastructure.
Wifi calling is great for telcos: with a software upgrade to their infrastructure they get good voice coverage indoors, in places where the cell signal is poor and improving the network is not cost-effective, but broadband can still reach (i.e., big residential areas).
The catch for telcos? During a wifi call you’re not roaming: the tunnel ends at your home operator’s infrastructure, so it counts as a local call.
I find this great at airports that give you some free wifi minutes (like Zurich and Madrid).
I can’t remember how old I was when I started coding. Maybe I was 9 or 10 years old.
Back then, we coded in BASIC directly on a Spectrum 48K via a horrible command-line interface that we found (surprisingly, in hindsight) pleasant.
Nobody did anything to test their programs. You just ran it, typed something on the keyboard, and it worked (or it didn’t).
Nowadays things are not that simple. You have to test your software thoroughly because you don’t control where and how your app will be used.
How to make software testable?
If you started coding back in the 80s, it’s almost impossible to resist the urge to build a quick’n’dirty prototype just to see results (yep, there was no REPL available in those days).
So, after getting *some* results (without formal testing), it’s easy to convert your mini app into something testable.
This is the “trick”:
Refactor your quick’n’dirty prototype into two pieces:
A shell with minimal functionality that reads like pseudocode (if you’re coding in C, your app’s main() function will live there), and a bunch of simple functions that you can test (and hopefully reuse in the future).
Strip everything from your codebase until the shell reads like pseudocode. That pile of code is almost untestable; just give it a formal technical review and let it live (or die, if it’s not okay).
All code that doesn’t read like English or pseudocode should be “promoted” to a function and moved to a separate file.
Compile and run the app. Everything works as before, right?
When you have everything modeled as functions, testing is easy: just write a program that feeds each function with all the data you can imagine and compares the output against the expected results. You don’t need a testing framework to do this: a simple program will do.
BTW – if your code has global variables or uses singletons, you should refactor your logic to remove them before making anything “testable”.
I’ve just finished reading a review copy of Talend for Big Data, courtesy of Packt Publishing. I’ve been using Talend for ETL and automation tasks for some years, and I wanted to start using it to feed data into a small Hadoop cluster we have, so I think I can easily put myself in this book’s readers’ shoes.
Book structure: a journey in Big Data
I enjoyed that the book follows a real use case of sentiment analysis on Twitter data: I was getting tired of the word-counting / term-extraction examples found in other Hadoop texts.
Although the book doesn’t describe in depth how to get the data from the Twitter API using a Talend component (there are many available for this task), I think the information is enough to follow the steps in the book: keep in mind the use case is an excuse to work with Talend and Big Data.
The structure is very straightforward, and it closely resembles a real-world Big Data integration job:
The basics: what’s Talend, what’s Hadoop, and how to get started (terminology and setup)
How to get data into a Hadoop cluster (there’s a component for that: tHDFSOutput)
Working with tables in Talend using Hive.
Working with data using Pig.
Loading results back into an SQL database using Apache Sqoop
And finally, how to industrialize this process.
In the real world you’ll surely choose between Hive and Pig to keep your project simple. Having one chapter for Hive and another for Pig lets you compare both technologies side by side and choose the one you feel more comfortable working with.
I also found it very interesting to use Apache Sqoop to get the data out of Hadoop and back into the SQL world.
I didn’t know about Sqoop before reading the book, and I was tempted to extract the data from Hadoop using a Talend job as a bridge. Don’t do it! Using Sqoop is much better because it can parallelize the load job. It reminds me of making backups with a disk array versus a server agent (you just tell the array to do the backup on its own, instead of copying all the data to one point and moving it around).
Contexts! I’ve always thought the best part of Talend is contexts, and I find it great to see all the examples in the book using contexts from the beginning.
In chapter 4 we learn how to use UDFs (user-defined functions) with Hive inside Talend. In the book, the problem they solve is Hive’s lack of support for regular expressions; but it gives us a clue that we could do something interesting with other kinds of data, like images or audio files.
The way Talend works with Pig is easier than I expected. Why? Because you don’t need to know any Pig Latin to get results. I expected something more complicated. In fact, I think I’m going to use the tPig* components more often than the Hive ones.
For me, the chapter about using Sqoop with Talend alone justifies buying the book, because it saves you a lot of time.
I discovered in the book that Talend doesn’t include all the JARs needed to work with Hadoop. This is not a technical problem per se, but a legal one: Talend cannot distribute the Hadoop files under their own license. Fortunately, the folks at Talend have made a one-click fix available.
At first glance I found the book short. Maybe I’m used to technical books with a lot of literature, and this book has a very practical, how-to-make-things-happen approach. I hope to see a second edition soon with a chapter dedicated to Google BigQuery (which, by the way, is supported by Talend in the latest release with its own set of components).
Conclusion: a concise, hands-on book about data integration with Talend and Hadoop. Highly recommended, even if you just want to extract data from an existing Hadoop cluster.
This happened long before LinkedIn, Taleo, or Jobvite existed. The hack still works. Give it a try.
A friend needed to fill a developer position for a client in a hurry. He posted the opening on a job site and waited several days, hoping to find a good candidate for the job; but it didn’t happen.
He received 5-10 CVs; none of them was a good match for the position. Some lacked experience, and others didn’t know the technology well enough.
We conceived this hack after lunch. He told me about his trouble filling the position, and what started as a joke ended up working.
0- Go to a physical / brick-and-mortar bookstore.
1- Find a book the potential new employee will need to use in the job. For developers a safe bet is any O’Reilly language cookbook, or Pragmatic Programmers book.
2- Choose a chapter from the table of contents that deals with the specific skill that’s giving you a headache.
3- Insert a post-it note somewhere within that chapter that says something like this: