No 2/2013
The DEVELOPER NDC SPECIAL EDITION
FOR DEVELOPERS AND LEADERS
CODE FOR KIDS Page 8
PLAYING WITH DRONES AND STREAMS By Bjørn Einar Bjartnes Page 4
THE POWER OF RABBITMQ By Alvaro Videla Page 10

Contributors:
NORWEGIAN DEVELOPERS CONFERENCE 2013 John Zablocki
Mark Seemann
Torstein Nicolaysen
Vidar Kongsli
Helge Grenager Solheim
Jørgen Vinne Iversen
Russ Miles
Jon McCoy
Dominick Baier
Alvaro Videla
Jon Skeet
Giovanni Asproni
Alan Smith
Michael Heydt
Bryan Hunter
Stuart Lodge

Oslo Spektrum, Oslo, June 12-14. Pre-workshops June 10-11
Playing for profit

What is play? The Oxford dictionary defines play as "engage in activity for enjoyment and recreation rather than a serious or practical purpose." In the context of developers, I would propose a more specific definition: "coding for fun without any obvious business value".

Why is play useful? Play has been found to be a critical part of how children learn. For adults, I can't speak for anyone but my friends and myself. We have found that playing helps us learn new concepts and improves our understanding of both the details and the bigger picture in what we do. By playing we acquire skills that help us break free from suboptimal solutions.

How is play facilitated? My playground consists of good friends with bad ideas. Bad ideas have in our experience turned out to be the most fun. We try to be more like snowboarders pushing and cheering each other on to new tricks, and less like project managers trying to plan and control. In our experience the most important lesson was to leave business value out of playing.

NDC to me is a festival of creativity and a terrific playground for developers - a chance to hook up with old friends and meet new ones. Go play. Go be creative.

What do we take home from this playground? I am sure I will go home fired up with inspiration, newly acquired skills and a playful attitude to handling the kind of problems people pay us to solve.

Bjørn Einar Bjartnes
Developer, Computas AS
"I didn't start climbing. It was all you others who stopped." Arne Næss, philosopher and mountaineer
Pre-conference Workshops June 10-11th
Get your All Access Pass now!

1-day conference: NOK 8.200,–
2-day conference: NOK 9.700,–
3-day conference: NOK 11.200,–
1-day workshop: NOK 5.900,–
2-day workshop: NOK 8.900,–
All Access Pass: NOK 18.200,–

ndcoslo.com
Contents

ARTICLES
Playing with drones and streams ................... p. 4
Code for Kids ..................................... p. 8
The power of RabbitMQ ............................. p. 10
A heuristic for formatting code ................... p. 14
TypeScript ........................................ p. 18
But what does it all mean? ........................ p. 22
Search in SharePoint 2013 ......................... p. 26
Security from the ground up ....................... p. 30
Erlang for C# developers .......................... p. 32
Couchbase NoSQL for SQL Developers ................ p. 34
Developers: Meet PowerShell ....................... p. 40
MVX: Model View Cross-Platform .................... p. 44
OAuth, OpenID, JWT – WTF? ......................... p. 50
Stop wasting your life! ........................... p. 57
Grid Computing on Windows Azure ................... p. 62
A first pattern of TPL ............................ p. 64
DDD, CQRS, ES Misconceptions and Anti-Patterns .... p. 68
A very brief introduction to API Usability ........ p. 72

COURSES
Course overview Oslo .............................. p. 76
Course overview London ............................ p. 78

NDC 2013
The NDC Agenda Committee .......................... p. 82
Entertainment and food ............................ p. 84
Oslo - the capital of Norway ...................... p. 86
Pre-conference workshops .......................... p. 90
Program Wednesday - Friday ........................ p. 92

Publisher: Norwegian Developers Conference AS
By Programutvikling AS and DeveloperFocus Ltd
Organisation no.: 996 162 060
Editor: Kjersti Sandberg
Address: Martin Linges vei 17-25, 1367 Snarøya, Norway
Phone: +47 67 10 65 65
E-mail: info@programutvikling.no
Member of Den Norske Fagpresses Forening
Publication schedule 2013: 15 August, 15 November
Print run: 13,000
Print: Merkur Trykk AS
Cover photo: Kristoffer Sunnset. Uncredited photos are from Shutterstock, except portraits.
Advertise in The Developer and meet your target group!

For more information about advertisement, please contact Henriette Holmen at 976 07 456 or henriette.holmen@programutvikling.no

[Advertisement spread showing covers of previous issues (No 4/2012, No 1/2013, No 2/2013), with cover stories including "Exploring Functional Programming" by Venkat Subramaniam, "Cynefin for Developers" by Liz Keogh, "Why You Should Care About Functional Programming" by Neal Ford, "Scrolling Both Ways", "WinRT for the iOS Developer" by Iris Classon, "CoffeeScript" by Jøran Lillesand and Eirik Lied, "Demons May Fly Out of Your Nose" by Olve Maudal, and "Transitioning Successfully from the IT Side to Business" by Howard Podeswa, alongside an SMS-services advertisement from ViaNett (www.vianett.com).]
PLAYING with drones and streams
From left: Einar W. Høst, Bjørn Einar Bjartnes and Jonas Winje.
We tend to apply styles and architectures we already know rather than creating a design that is tailored to our domain. Adding features to an ill-designed architecture leads to code that is hard to reason about, resulting in higher cost and more bugs. By playing we can break out of our routine and learn to think creatively about design. If everyone played more with different ways of doing things, we would be better at designing programs, resulting in leaner code with fewer bugs. By Bjørn Einar Bjartnes. Photo: Kristoffer Sunnset
In this article I’ll investigate the stream paradigm by playing with drones. It’s an old paradigm, but it was fairly unfamiliar to me before I bought a drone and started playing with it using node.js. Now I am in love with streams.

STREAMS
Streams are sequences of data made available over time. Streams can be visualized as a conveyor belt that makes data available in ordered chunks. Rather than waiting until the entire batch is produced, a stream will emit a chunk of the sequence as soon as it is ready. This can be useful for many reasons. Streams require fewer resources for storage, as they do not store the whole batch before moving on to the next processing step. In computer terms this means less memory usage, for example by chunking up a file and sending it in pieces as opposed to reading the entire file into memory. Latency is also reduced, as the first parts of the data are sent to consumers straight away. Streams also fit well with modeling infinite sequences and data that might not yet be available, for example sending an infinite sequence of measurements without having to wait for future measurements to arrive.

A simple stream interface allows for building a program out of many small streams that each do one thing, connected in a pipeline. Streams as implemented in Unix and in node.js are super-composable and make it easy to build complex applications out of smaller building blocks. The philosophy of streams is captured in the words of Doug McIlroy: "We should have some ways of connecting programs like garden hose - screw in another segment when it becomes necessary to massage data in another way. This is the way of IO also." (Doug McIlroy, October 11, 1964)

THE PLAYFUL APPROACH
To break free from traditional architectural patterns, it helps to play with a new problem. A suitable toy for playing with streams should generate a sequence of data in real time. What better toy than drones to provide us with real-time measurements?
Toys also provide a good defense against haters who might claim you are doing something wrong in your experiments. The code you write is obviously just for fun, so it can be as crazy as you like. A good starting point for thinking outside the box is to use just the stuff I need to accomplish whatever I need the application to do. If I were to start from what I already know, I could end up with a traditional solution based on a collection of standard frameworks that may or may not be suitable for the job. In this article I am aiming for a super-lightweight, domain-specific architecture based on streams and pipes.
About the drones The AR Drone 2.0 is an inexpensive, easy-to-use and easy-to-program quadrotor helicopter. It is accessed programmatically from your laptop over WLAN. The ar-drone module provides info from sensors, video from cameras and allows for sending commands to control the drone. Lower-level control, such as stabilization, is taken care of on board the drone. In this article I am accessing the sensor data only and leaving video and control for the talk on the topic.
STREAMING DATA FROM DRONE TO CONSOLE
Our first goal is very simple: to demonstrate that we can create a stream of real-time navigation data from the drone. The ar-drone module emits a navdata event with navigation data when it receives data from the drone. This data contains measurements such as height, estimated velocities, compass direction and much more. The events need to be wrapped as a readable stream to be compatible with the other node.js streams. Wrapping events in a stream can be done simply by pushing data into a new readable stream. To verify that the stream is working its magic, we’ll simply pipe it to process.stdout. Most streams talk in buffers or strings, whereas the navdata stream produces JSON objects. To transform the data, we simply pipe it through a stream that serializes the objects into strings. This little program will write navigation data from the drone to the console, in real time as it is being sent from the drone.
var arDrone = require('ar-drone');
var client = arDrone.createClient();

// Stream that emits timestamped navigation data
var stream = require('stream');
var Serializer = require('./serializer');
var navDataStream = new stream.Readable({ objectMode: true });

// Do nothing when asked to read
navDataStream._read = function () {};

// Instead the drone pushes data into the stream
client.on('navdata', function (chunk) {
  navDataStream.push({ key: Date.now(), value: chunk });
});

navDataStream.pipe(new Serializer()).pipe(process.stdout);
STREAMING DATA FROM DRONES OVER HTTP
Now that we have established a stream of nav data from the drone, let’s make it available for consumers on the web. In node.js, http responses are streams that can be piped to, just as we piped data to process.stdout. This allows us to provide access to real-time data in a browser simply by piping data to the response. I use the npm module express to configure my web application. Data will now be streamed to any client connecting to http://localhost:3000/rt. Again we need to add a serializer to the pipeline.

var express = require('express');
var app = express();
app.listen(3000);

// Serve rt-data in never-ending stream
app.get('/rt', function (req, res) {
  navDataStream.pipe(new Serializer()).pipe(res);
});
STREAMING DATA FROM DRONES TO DATABASE
You’ll notice that we’ve had no need for a database in our architecture so far. However, in addition to serving real-time data to consumers, we may want to keep a log of historical data as well. Often in our applications the database is the heart of the system, and everything else is built around it. In our lean stream-based architecture, however, the database is just another consumer of data. We can choose to pipe the real-time stream to a database just like we previously piped it to the console and to http responses. For the purposes of this application, I have chosen to use a lightweight key-value store called LevelDB, which has a streaming interface in node.js through the levelup module. It stores data in order based on keys, so by using timestamps as keys data can be retrieved fast and in order.

var levelup = require('levelup');

// Open database, create new if it does not exist
var db = levelup('./navdataDB', { valueEncoding: 'json' });

// Write real-time data to database
navDataStream.pipe(db.createWriteStream());
STREAMING DATA FROM DATABASE OVER HTTP
The database provides a streaming API, a readable stream sorted by keys. Providing access to historical data can be solved by piping the database stream to the http response. Start and end times could have been added as an input to the createReadStream method, but I’m just playing, so I did not bother to include them.

app.get('/history', function (req, res) {
  var dbStream = db.createReadStream();
  dbStream.pipe(new Serializer()).pipe(res);
});
MERGING HISTORICAL AND REAL-TIME DATA
Finally, let’s see how we can combine data from multiple streams by creating a seamless integration between historical and real-time data. In this case all historical data from the database is served first, and then the stream should switch to serving real-time data from the drone. We can continue to pipe real-time data into a stream after the database stream has completed. This means we have to dig a little deeper than just using pipe, as we need to know when the first stream has ended. We also need to buffer all real-time data while we create the read stream from the database, so that we do not lose any data when we switch streams. Another small issue is that the http stream will close if it gets an end event from the database stream, but by adding the {end: false} parameter to pipe we avoid that. (Not to be confused with the end parameter to the dbStream, which is the timestamp for the key at which to stop reading from the database!)

// Serve rt and historical data in same request
// Buffer the rt data until history has been sent
var BufferStream = require('./bufferStream');

app.get('/historyAndRt', function (req, res) {
  var bufferStream = new BufferStream();
  navDataStream.pipe(bufferStream);
A BIRD’S-EYE VIEW OF THE CODE
I ended up with an architecture that is quite far from the traditional .NET web stack. I have previously worked on projects providing real-time instrument data in the real world, and they all tend to rely on a standard 3-tier architecture with requests hitting the database to get the latest data point. In this example we created an application based only on streams and connected them by piping one stream into the next. Data flows from the source that generates data to the consumers. If more processing has to be included, such as filtering and transforming data, it can be added by inserting new streams into our pipeline. One interesting point is that most of the streams don’t know about each other, but are simply connected using pipe. The merging of real-time and historical data required some direct manipulation, but this could have been solved by writing a module that operates abstractly on streams. If you want to play more with drones, come check out the NDC talk ‘Reactive meta-programming with drones’. All the code for this example is available at https://github.com/bjartwolf/rt-process-data
  var dbStream = db.createReadStream({ end: Date.now() });

  // Must not emit end, or the http stream will be closed
  dbStream.pipe(new Serializer()).pipe(res, { end: false });

  dbStream.on('end', function () {
    res.write('\n Switching to real-time stream \n');
    bufferStream.pipe(new Serializer()).pipe(res);
    bufferStream.start();
  });
});
Bjørn Einar is a developer with Computas AS where he works with SharePoint solutions for the petroleum industry. You'll also find him coding for fun in the Computas Lambda Club and tinkering with R/C cars and drones.
How the Norwegian movement to get kids coding came about - and what we're up to
Code for Kids By Simen Sommerfeldt
You have all seen the video from code.org, right? Where Hadi Partovi assembled the brightest stars of the American computer industry - people who in their day jobs fight each other in brutal and healthy competition. In this endeavour, however, they are all united - as we are in Norway and many other countries. Our mission is to wake the population from a slumber - the slumber of allowing the young ones to only be consumers - not creators - of technology.
“For young people: it teaches you how to think, it unlocks creativity, and builds confidence. It’s an amazing feeling for a young boy or girl to realize, ‘If I don’t like something, I can change it. If I wish I had something, I can create it.’ This sense of empowerment is valuable no matter what path you choose in life. (And of course, if you choose to pursue Computer Science more seriously, it unlocks amazing career opportunities.)” Hadi Partovi, creator of code.org
Being on the advisory board of a local polytechnic (HiOA), I listened to the teaching staff complaining about the low number of qualified students entering the Computer Science programmes. I then realized that the focus on user-friendliness in computing has accidentally prevented the children from discovering that they can actually program the computers that surround them. And even though we are surrounded by more computers than ever before, there is less CS-related teaching in the primary and high schools than when I grew up!
When I was young, the only way of making my Commodore 64 do fun stuff was to program it. Being a son of a widowed librarian, I couldn't afford the expensive games on sale. So I had to type in the game listings in magazines. As a consequence, I discovered that I had some talent in programming, and later on pursued a career in computing. As did many of my contemporaries. Now there is a dire shortage of new talent in the business.
Fast-forward to February 2013. The code.org video had made its mark, and I had been pondering for a while the idea of establishing a local Meetup aimed at teaching the kids in Oslo to code. I had mentioned it to my partners on the eastern board of the Norwegian Computer Society. So I casually responded to a tweet by Olve Maudal, and challenged him to join me in making a programming course for children. He responded favourably, as did many others. I turned to Johannes Brodwall to help me establish a meetup group. And unwittingly turned on the fire hose!

Just like in the U.S., people and companies came in hordes. The first month was an unreal experience. The leaders in the developer communities and some members of academia in Norway all tweeted to make their followers join in, and soon enough we had sister meetups in the major Norwegian cities. The industry journalists, notably Eirik Rossen of digi.no, also chimed in and called to action. Then came Torgeir Waterhouse, director of Internet at ICT Norway. The two of us hit it off, and decided to run this project together. Lots of companies - personally represented by their CEOs - stated that they fully supported this. And to top it all, the Minister of Government Administration, Reform and Church Affairs gave a talk at the kickoff meeting, which was live streamed to hundreds of people in their homes and to peer meetings in the other major cities. After five weeks!

The voluntary organizations in Great Britain have shown the way in this field. Computing at School (CaS), Code Club, and the Raspberry Pi Foundation have already made a massive impact on the computer literacy of British children. Through our network, we have had the privilege of reaching the front persons involved. We recently had a phone meeting with Simon Peyton-Jones of CaS - whom you may also know as a major contributor to the Haskell functional language. Linda Sandvik of Code Club allows us to translate and use all of their material, and we have only just started to explore how being connected to CaS can accelerate the process of introducing CS in Norwegian schools. By the way, the good people at Codecademy in the U.S. kindly let us translate their material, too!
We have more than 650 members in nine cities, and tens of schools are already busy introducing programming in their curriculum. The "inner circle" of the project counts some 80 persons in several working groups. We are a movement of doers, not bureaucrats: "Organizations organize, movements move". So we are currently focusing on five areas: translating and establishing teaching material, arranging local meetings for children and their parents, organizing a network of teachers and schools, establishing local code clubs, and building a website that allows people to find each other and seek the assistance they need.
One of the coolest things happening this spring is the introduction of the “kodeklubben” section of NDC - we will have a special evening on June 11th where the young and their parents are introduced to coding. Even cooler: Linda Sandvik - who co-founded Code Club - is coming over to join us! I fully appreciate that I am just an ordinary tech bloke who accidentally triggered this - it was an explosion waiting to happen. There are many brilliant individuals and organizations who have already done a lot of work in the field. What we have done is to connect them. So recently I have had the privilege of engaging fantastic people whom I would never have dreamed of working with in my day job. This fills me and Torgeir with appreciation and humility, at the same time as we are bent on steering this project through. I feel privileged to have an employer - Bouvet ASA - who has been very patient with me, effectively sponsoring a favor to society. In conclusion: Seek out your local primary school and offer your services - or help establish an after-school code club for the children in your neighborhood. If you happen to be in Norway, you can find us at www.kidsakoder.no. There you should find helpful persons, how-tos and teaching material that quickly gets you going.
Simen Sommerfeldt started out in telecom, ending up in Ericsson after a few acquisitions. He has been with Bouvet Oslo for ten years now, where he used to run the Java department and is now finding his way as CTO. He lives just south of Oslo, and has three children aged six to twelve - whom he tries to introduce to coding. His wife and their dog sometimes feel left out of the family coding community.
THE POWER OF RabbitMQ

© Shutterstock
RabbitMQ is a messaging and queueing server that can help you scale your applications in many different ways. RabbitMQ might be the tool you are looking for if you need to scale your system processing power during peak time and shrink it down when traffic goes down. What if you need to integrate many disparate services, probably living in different networks and implemented in different programming languages? These kinds of use cases and many more are a perfect fit for RabbitMQ. By Alvaro Videla
Let's step back for a second and review the concepts introduced above, starting with messaging. Messaging is used to move data between processes. There are processes producing data, and processes consuming that data. The difference between messaging and Remote Procedure Calls is that communication is usually asynchronous and event-based. We may ask: "why don't you just open a socket?". The difference is that when we use messaging and a messaging server, we delegate a lot of complexity to the message broker, whereas when we "just open a socket" we need to perform a lot of bookkeeping: making sure messages are not lost, keeping a list of peer addresses, ensuring that messages are consumed on the other side of the socket, and so on. By using a messaging broker like RabbitMQ, those problems are taken care of for us. I've said that messaging architectures are event-based, which allows our architectures to be easily decoupled and more adaptable to future changes. How? Let's consider the simple example of an image uploader.

%% image_controller
handle('PUT', "/user/image", ReqData) ->
    image_handler:do_upload(ReqData:get_file()),
    ok.
In that previous snippet we create a controller for the method PUT to the resource "/user/image". Whenever we get a new image upload, the image_handler will store it on the hard drive and insert the image information on the database. Finally our controller just returns ok.
What would happen to that code if we need to resize the images after they are uploaded? We would have to revisit that code, add a line to call the image resizer, and redeploy our application:

handle('PUT', "/user/image", ReqData) ->
    ImgData = image_handler:do_upload(ReqData:get_file()),
    resize_image(ImgData),
    ok.
If we need a second change - say, notifying the user’s friends about her upload - then we will need to add one more line to our controller and redeploy again. It's clear this doesn't scale to new requirements. Also, if we need to scale up our image resizing speed, we will have to fire up more application instances (with all their logic) when we just need to scale the image resizer alone. Not cool. Now let's consider what would happen if we used a messaging-oriented architecture. Our controller would look like this:

handle('PUT', "/user/image", ReqData) ->
    {ok, Image} = image_handler:do_upload(ReqData:get_file()),
    Msg = #msg{user = ReqData:get_user(), image = Image},
    publish_message('new_image', Msg),
    ok.
We process the image, then we create a Msg structure (think of a JSON object) and publish an event stating that there's a new image in the system. If someone is interested in further processing that image, that's fine, but our image controller is only concerned with handling the image upload. Think about the user who wanted to upload her image. Before, she had to wait for our resizer to complete in order to continue browsing. Now, as soon as the image is stored, our controller will return a confirmation to her. Thanks to our messaging-oriented architecture we can set up separate consumers (or event listeners, if you will) that take care of further processing the image, or react accordingly (like notifying the user’s friends). One such process could have the following function:
on('new_image', Msg) ->
    resize_image(Msg.image).
We can launch as many of those processes as we need according to our processing needs. Of course, for simplicity’s sake we don't include the code that instantiates a connection to our message broker and so on. Let's say later we also need to award points to the user for her image uploads. We just code our add_points service and start a new process like this:

on('new_image', Msg) ->
    add_points(Msg.user).
This new process will also listen for the new_image events and react accordingly. Later our product owners decide that we are going to remove the awards feature from our website. In our non-messaging-oriented solution we would have to redeploy our code again. In our new version we just need to stop the process that's awarding points to the user. The controller won't notice the difference, and neither will the other event listeners or consumers. The application won't see any downtime either.
Now this might seem a bit magical, but all this is possible in our case thanks to the fact that we are using a message broker like RabbitMQ. What is the broker providing for us here? First of all, it provides routing. When we publish our event, we send it to an address in the broker, say new_image, which in RabbitMQ concepts is called an exchange. In RabbitMQ you publish messages to an exchange created by the application.

The story doesn't end there. Remember that I said that in messaging we have producers and consumers. What happens if the consumer is offline? We don't want our messages to be discarded because the consumer is not present. With RabbitMQ that is solved by using queues. For each kind of task we have a queue where messages go and wait. In our example one queue would be called image_resize, and another one would be called points. How do the messages reach those queues if we publish them to our new_image exchange? In RabbitMQ we bind queues to exchanges, an operation that is similar to saying: "I'm interested in this kind of event - new images - please send me a copy whenever there is a new one". In our example the exchange could be of the type fanout, where every message is routed to all the queues bound to it. Don't worry, RabbitMQ is smart enough to keep only one copy of the message while at the same time making it present in all those queues. The following figure shows our messaging topology for the image uploader example, with one producer (in green) sending messages to the new_image exchange (in red). The messages are fanned out to two queues (in blue) and then reach their respective consumers.
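RabbitMQ does this routing for you, but the idea is easy to see in a toy model. The sketch below is not RabbitMQ code - it is a made-up, in-memory illustration of a fanout exchange delivering a single message to every bound queue:

```javascript
// Toy in-memory model of fanout routing, for illustration only:
// an exchange keeps a list of bound queues, and publishing a message
// hands a reference to it to every bound queue (RabbitMQ similarly
// keeps one copy of the message body internally).
function FanoutExchange() {
  this.queues = [];
}
FanoutExchange.prototype.bind = function (queue) {
  this.queues.push(queue);
};
FanoutExchange.prototype.publish = function (message) {
  this.queues.forEach(function (queue) {
    queue.push(message); // every bound queue sees the message
  });
};

var newImage = new FanoutExchange();
var imageResize = []; // stands in for the image_resize queue
var points = [];      // stands in for the points queue
newImage.bind(imageResize);
newImage.bind(points);

newImage.publish({ user: 'alice', image: 'cat.png' });
```

After the publish, both queues hold the same single message: the resizer and the points service each get their own copy to consume, without the producer knowing either of them exists.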
Learn how to become a great ScrumMaster
SCRUM MASTERY: FROM GOOD TO GREAT SERVANT-LEADERSHIP In his over ten years of coaching numerous Scrum teams, the highly-respected and experienced Scrum coach Geoff Watts has identified patterns that separate a good ScrumMaster from a great one. In this book, he not only illustrates these patterns through stories of his own experiences and those of the many Scrum teams he has encountered but offers practical guidance for you on your own path to greatness. Scrum Mastery is for practicing ScrumMasters who want to develop themselves into a great servant-leader capable of taking their teams beyond simple process compliance.
Available on amazon.com from 1st June
Mike Cohn, in his foreword for the book, said: "Most books rehash well-trod territory and I don’t finish them any wiser. I am positive I will be referring back to this book for many years" Roman Pichler said: "I am thoroughly impressed with how comprehensive and well-written the book is. It will be indispensable for many people"
What happens if we get more messages than we can process? They will start to pile up in our queues, and image processing speed will drop considerably unless we do something. Here comes another advantage of queueing: we can start more processes, say image resizers, run them in parallel and increase our message processing speed. RabbitMQ will do all the heavy lifting for us, taking care of delivering the messages in a round-robin fashion. What happens if a worker dies? The message goes back to the queue and will be ready for the next consumer to process it. Enabling this behavior in RabbitMQ is as easy as telling the broker, when our process starts consuming, that it will acknowledge the processing of each message. If the consumer dies before the ack is sent, the message will be back in the queue. It's worth noting that if the new-image ingress rate decreases, we can stop consumers accordingly, saving precious CPU and energy resources. Now let's say we implemented our image resizer in language X, that doesn't cut it anymore, and we want to rewrite it in language Y. What can we do? If we were using our own solution we would probably need to rewrite quite a bit of our code to be able to interoperate with the new programming language. On the other hand, when we "talk" to RabbitMQ we do it by using the AMQP protocol. Clients for AMQP exist in many languages: Java, .Net, PHP, Javascript, Python, Ruby, etc. What we can do here is just rewrite the image resizing bits in language Y and then use AMQP to interoperate with the rest of our app.
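The round-robin delivery and acknowledgement behaviour described above can also be sketched with a small in-memory model. Again, this is plain TypeScript rather than an AMQP client: a worker that throws before acknowledging causes its message to be requeued, so nothing is lost. All names are invented for the illustration.

```typescript
type Worker = (msg: string) => void;

// Toy work queue: messages are dealt to workers in round-robin order, and
// a message whose worker dies (throws) before acking goes back on the
// queue for another worker to pick up.
class WorkQueue {
  private messages: string[] = [];

  publish(msg: string): void {
    this.messages.push(msg);
  }

  dispatch(workers: Worker[]): void {
    let turn = 0;
    while (this.messages.length > 0) {
      const msg = this.messages.shift()!;
      const worker = workers[turn % workers.length];
      turn++;
      try {
        worker(msg); // returning normally counts as the ack
      } catch {
        this.messages.push(msg); // no ack received: requeue the message
      }
    }
  }
}

const processed: string[] = [];
let crashed = false;

const steady: Worker = (msg) => {
  processed.push(msg);
};
const flaky: Worker = (msg) => {
  if (!crashed) {
    crashed = true; // dies once, before acking its first message
    throw new Error("worker died");
  }
  processed.push(msg);
};

const queue = new WorkQueue();
["img1", "img2", "img3"].forEach((m) => queue.publish(m));
queue.dispatch([flaky, steady]);

console.log(processed); // every message is eventually processed exactly once
```

The design point mirrors the article: because the ack is the only contract between broker and worker, adding or removing workers never requires changing the producer.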
Finally, if we need to communicate with RabbitMQ from a platform where there's no AMQP client, we still have choices, since RabbitMQ is a multi-protocol broker. That is, we can talk to it by using AMQP, MQTT and STOMP [1]. MQTT, for example, is a very lightweight protocol used for machine-to-machine communication, for example in the "Internet of Things", where you need to interconnect very small devices. As we can see, a messaging-oriented architecture can bring huge benefits to our application, making it truly decoupled, easy to scale – up and down – and adaptable to new requirements. We can build an app where technology changes, like new programming languages, are not a big deal. RabbitMQ is an open source message broker that lets us enjoy the benefits of messaging right now. If you want to learn more about RabbitMQ, please visit the website http://www.rabbitmq.com and follow the introductory tutorials that will get you up to speed right away: http://www.rabbitmq.com/getstarted.html
[1] http://blogs.vmware.com/vfabric/2013/02/choosing-your-messaging-protocol-amqp-mqtt-or-stomp.html
Alvaro Videla works as a Developer Advocate for Cloud Foundry. Before moving to Europe he worked in Shanghai, where he helped build one of Germany's biggest dating websites. He also co-authored the book "RabbitMQ in Action".
Book your hotel room today! If you need accommodation during the conference, you should book your hotel room now. June is one of Oslo's busiest months, so there might be a shortage of hotel rooms. NDC has reserved a number of rooms for your convenience during the conference period. Radisson Blu Plaza / Thon Opera / Thon Spektrum / Thon Astoria / Thon Terminus
Please visit ndcoslo.com for more information!
ndcoslo.com
A HEURISTIC FOR FORMATTING CODE according to the AAA pattern
The Arrange Act Assert (AAA) pattern is one of the most fundamental and important patterns for writing maintainable unit tests. It states that you should separate each test into three phases (Arrange, Act, and Assert). By Mark Seemann
Like most other code, unit tests are read more than they are written, so it's important to make the tests readable. This article presents one way to make it easy for a reader to distinguish the three AAA phases of a test method. THE WAY OF AAA The technique is simple: • As long as a test contains no more than three lines of code, they appear without any special formatting or white space. • When a test contains more than three lines of code, you separate the three phases with a blank line. • When a single phase contains so many lines of code that you'll need to divide it into subsections to make it readable, you should explicitly mark the beginning of each phase with a code comment.
This way avoids the use of comments until they are unavoidable; at that time, you should consider whether the need for a comment constitutes a code smell. MOTIVATING EXAMPLE Many programmers use the AAA pattern by explicitly demarcating each phase with a code comment, see below. Notice the use of code comments to indicate the beginning of each of the three phases. Given an example like the snippet below, this seems like a benign approach, but mandatory use of code comments starts to fall apart when tests are very simple.
[Fact]
public void UseBasketPipelineOnExpensiveBasket()
{
    // Arrange
    var basket = new Basket(
        new BasketItem("Chocolate", 50, 3),
        new BasketItem("Gruyère", 45.5m, 1),
        new BasketItem("Barolo", 250, 2));
    CompositePipe<Basket> pipeline = new BasketPipeline();
    // Act
    var actual = pipeline.Pipe(basket);
    // Assert
    var expected = new Basket(
        new BasketItem("Chocolate", 50, 3),
        new BasketItem("Gruyère", 45.5m, 1),
        new BasketItem("Barolo", 250, 2),
        new Discount(34.775m),
        new Vat(165.18125m),
        new BasketTotal(825.90625m));
    Assert.Equal(expected, actual);
}
Consider this Structural Inspection test:

[Fact]
public void SutIsBasketElement()
{
    // Arrange
    // Act?
    var sut = new Vat();
    // Assert
    Assert.IsAssignableFrom<IBasketElement>(sut);
}

Notice the question mark after the // Act comment. It seems that the writer of the test was unsure if the act of creating an instance of the System Under Test (SUT) constitutes the Act phase. You could just as well argue that creating the SUT is part of the Arrange phase:

[Fact]
public void SutIsBasketElement()
{
    // Arrange
    var sut = new Vat();
    // Act
    // Assert
    Assert.IsAssignableFrom<IBasketElement>(sut);
}

However, now the Act phase is empty. Clearly, using code comments to split two lines of code into three phases is not helpful to the reader. THREE LINES OF CODE AND LESS Here's a simpler alternative:

[Fact]
public void SutIsBasketElement()
{
    var sut = new Vat();
    Assert.IsAssignableFrom<IBasketElement>(sut);
}

When there's only two lines of code, the test is so simple that you don't need help from code comments. If you wanted, you could even reduce that test to a single line of code, by inlining the sut variable:

[Fact]
public void SutIsBasketElement()
{
    Assert.IsAssignableFrom<IBasketElement>(new Vat());
}

Such a test is either a degenerate case of AAA where one or more phases are empty, or else it doesn't really fit into the AAA pattern at all. In these cases, code comments are only in the way, so it's better to omit them. Even if you have a test that you can properly divide into the three distinct AAA phases, you don't need comments or formatting if it's only three lines of code:

[Theory]
[InlineData("", "", 1, 1, 1, 1, true)]
[InlineData("foo", "", 1, 1, 1, 1, false)]
[InlineData("", "bar", 1, 1, 1, 1, false)]
[InlineData("foo", "foo", 1, 1, 1, 1, true)]
[InlineData("foo", "foo", 2, 1, 1, 1, false)]
[InlineData("foo", "foo", 2, 2, 1, 1, true)]
[InlineData("foo", "foo", 2, 2, 2, 1, false)]
[InlineData("foo", "foo", 2, 2, 2, 2, true)]
public void EqualsReturnsCorrectResult(
    string sutName,
    string otherName,
    int sutUnitPrice,
    int otherUnitPrice,
    int sutQuantity,
    int otherQuantity,
    bool expected)
{
    var sut = new BasketItem(sutName, sutUnitPrice, sutQuantity);
    var actual = sut.Equals(
        new BasketItem(otherName, otherUnitPrice, otherQuantity));
    Assert.Equal(expected, actual);
}

Three lines of code, and three phases of AAA; I think it's obvious what goes where – even if this single test method captures eight different test cases. SIMPLE TESTS WITH MORE THAN THREE LINES OF CODE When you have more than three lines of code, you'll need to help the reader identify what goes where. As long as you can keep it simple, I think that you accomplish this best with simple whitespace:

[Fact]
public void UseBasketPipelineOnExpensiveBasket()
{
    var basket = new Basket(
        new BasketItem("Chocolate", 50, 3),
        new BasketItem("Gruyère", 45.5m, 1),
        new BasketItem("Barolo", 250, 2));
    CompositePipe<Basket> pipeline = new BasketPipeline();

    var actual = pipeline.Pipe(basket);

    var expected = new Basket(
        new BasketItem("Chocolate", 50, 3),
        new BasketItem("Gruyère", 45.5m, 1),
        new BasketItem("Barolo", 250, 2),
        new Discount(34.775m),
        new Vat(165.18125m),
        new BasketTotal(825.90625m));
    Assert.Equal(expected, actual);
}
This is the same test as in the motivating example, only with the comments removed. The use of whitespace makes it easy for you to identify the three phases in the method, so comments are redundant. As long as you can express each phase without using whitespace within each phase, you can omit the comments. The only whitespace in the test marks the boundaries between each phase. COMPLEX TESTS REQUIRING MORE WHITESPACE If your tests grow in complexity, you may need to divide the code into various sub-phases in order to keep it readable. When this happens, you'll have to resort to using code comments to demarcate the phases, because whitespace alone would be ambiguous:
[Fact]
public void PipeReturnsCorrectResult()
{
    // Arrange
    var r = new MockRepository(MockBehavior.Default)
    {
        DefaultValue = DefaultValue.Mock
    };

    var v1Stub = r.Create<IBasketVisitor>();
    var v2Stub = r.Create<IBasketVisitor>();
    var v3Stub = r.Create<IBasketVisitor>();

    var e1Stub = r.Create<IBasketElement>();
    var e2Stub = r.Create<IBasketElement>();
    e1Stub.Setup(e => e.Accept(v1Stub.Object)).Returns(v2Stub.Object);
    e2Stub.Setup(e => e.Accept(v2Stub.Object)).Returns(v3Stub.Object);

    var newElements = new[]
    {
        r.Create<IBasketElement>().Object,
        r.Create<IBasketElement>().Object,
        r.Create<IBasketElement>().Object,
    };
    v3Stub
        .Setup(v => v.GetEnumerator())
        .Returns(newElements.AsEnumerable().GetEnumerator());

    var sut = new BasketVisitorPipe(v1Stub.Object);

    // Act
    var basket = new Basket(e1Stub.Object, e2Stub.Object);
    Basket actual = sut.Pipe(basket);

    // Assert
    Assert.True(basket.Concat(newElements).SequenceEqual(actual));
}

In this example, the Arrange phase is so complicated that I've had to divide it into various sections in order to make it just a bit more readable. Since I've had to use whitespace to indicate the various sections, I need another mechanism to indicate the three AAA phases. Code comments are an easy way to do this.
As Tim Ottinger described back in 2006 [1], code comments are apologies for not making the code clear enough. A code comment is a code smell, because it means that the code itself isn't sufficiently self-documenting. This is also true in this case. Whenever I need to add code comments to indicate the three AAA phases, an alarm goes off in my head. Something is wrong; the test is too complex. It would be better if I could refactor either the test or the SUT to become simpler. When TDD'ing, I tend to accept the occasional complicated unit test method, but if I seem to be writing too many complicated unit tests, it's time to stop and think.
SUMMARY In Growing Object-Oriented Software, Guided by Tests, one of the most consistent pieces of advice is that you should listen to your tests. If your tests are too hard to write, or too complicated, it's time to consider alternatives. How do you know when a test has become too complicated? If you need to add code comments to it, it probably is.
[1] http://butunclebob.com/ArticleS.TimOttinger.ApologizeIncode
Mark Seemann is a Danish programmer based in Copenhagen, Denmark. His professional interests include object-oriented development, functional programming, and software architecture, as well as software development in general.
Application-scale JavaScript development is not for the faint-hearted. It requires know-how, experience and mastery of JavaScript – at least if you're planning to do it well.
TypeScript By Torstein Nicolaysen
TypeScript is another language to emerge in the recent trend of languages that compile to JavaScript, and aims to aid you in building maintainable large-scale JavaScript applications. It can ease development and lower the threshold for writing high quality large-scale applications. TypeScript is still in preview, but makes a strong impression and gives you the future of JavaScript - today! JavaScript is being used for everything nowadays. Its usage has exploded, but developers are still struggling with building large-scale applications in an orderly fashion. The excellent article on CoffeeScript in the previous issue made the same point. With plain JavaScript it seems inevitable to end up with nothing but a bowl of spaghetti. MEET TYPESCRIPT TypeScript is being designed by Anders Hejlsberg, the lead architect of C#. It is mainly about two things: optional typing and next-generation JavaScript syntax. One of the key decisions is to align TypeScript with EcmaScript 6 (currently a draft). Several of the mature features in the new EcmaScript standard are already implemented in TypeScript. This means that you don't have to wait for the next version of JavaScript to arrive.
TypeScript is still under active development. Generics are one of the next big features to be released. The road map also shows plans for better ES6 compatibility, mixins, async/await and protected accessibility. WHY YOU SHOULD LOOK INTO TYPESCRIPT TypeScript is a typed superset of JavaScript. This is different from CoffeeScript, which is an abstraction that borrows syntax and paradigms from other languages like Ruby and Python. This makes adopting TypeScript easier. With its optional typing, you can gradually introduce typing into your existing codebase. For those new to JavaScript development, TypeScript can make it easier to be productive. Many have trouble learning the prototype-based programming paradigm in JavaScript. TypeScript addresses this by implementing concepts from object-oriented programming familiar to C# and Java developers. If you're using TypeScript with typing, a modern IDE will help you do safe refactoring. And don't worry about debugging: you can use source maps to map between the generated code and your TypeScript.
Torstein works as a consultant for BEKK, and has been working with web related technologies for over 10 years. He has written several large-scale JavaScript applications.
TYPESCRIPT IN A NUTSHELL Type annotations Optional typing is a clever design choice, which allows for gradual introduction of TypeScript to a JavaScript codebase. You can mix and match static and dynamic typing in the application, allowing you to choose the right solution for the problem at hand. Arrow functions Simply a shorthand for creating functions with lexical scope binding that lets you write elegant and concise code. When a callback function is called in regular JavaScript, this will refer to the context of the callee - sometimes an annoyance that needs a workaround. Lexical binding solves that. Modules Splitting functionality into modules is a good way to encapsulate and organize an application - two key practices for doing proper application-scale JavaScript development. You can think of modules like namespaces in C# and packages in Java.
Classes and interfaces With these familiar concepts it's easier to define objects and encapsulate code. TypeScript provides constructors, inheritance and public/private accessibility. The syntax closely aligns with EcmaScript 6, but TypeScript has some additions. Interfaces are currently not a part of EcmaScript 6, but are a useful addition in TypeScript.
Parameters TypeScript gives you default, optional and rest parameters. These are merely syntactic sugar, but they let you create flexible signatures and keep the code clean. EcmaScript 6 includes default and rest parameters, but not optional parameters.
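A brief sketch pulling the features above together: type annotations, a class implementing an interface, default and optional parameters, and an arrow function. The names here are invented for illustration; this is not from any particular codebase.

```typescript
interface Greeter {
  greet(name: string): string;
}

class FriendlyGreeter implements Greeter {
  // Default parameter: callers may omit the greeting.
  constructor(private greeting: string = "Hello") {}

  // Optional parameter: punctuation may be left out entirely.
  greet(name: string, punctuation?: string): string {
    return this.greeting + ", " + name + (punctuation || "!");
  }
}

const greeter = new FriendlyGreeter();
// Arrow function: concise syntax with lexical scope binding.
const greetings = ["Ada", "Linus"].map((n) => greeter.greet(n));
console.log(greetings);
```

Every annotation here is optional: remove them and the file is still valid TypeScript, which is what makes gradual adoption possible.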
Module loading Managing dependencies in plain JavaScript is hard. Require.js makes it bearable, but with TypeScript you get a flexible and elegant way of defining explicit dependencies. It works with either CommonJS (used in Node) or AMD (used in require.js), based on a compiler option. Definition files With definition files, you introduce typing to existing JavaScript. The definition file contains the signatures for the library, and lives side-by-side with the original library file. An IDE will also give you code completion on these libraries. Definition files are already available for many popular frameworks thanks to an active community. Tool support Considering that it's still a preview, the tool support for TypeScript is impressive. It's currently supported
in Visual Studio 2012, IntelliJ IDEA, WebStorm, PhpStorm, Sublime Text, Emacs, Vim and Cloud9. CONCLUSION TypeScript can make it easier to do application-scale JavaScript development - both for newcomers and experienced developers. With types, the tooling can catch whole classes of errors early, and large teams can work on the same codebase with more confidence. It doesn't fix JavaScript and all its quirks, but it makes the language easier to use for general programming.
Software Development Oslo, Kjeller, Stavanger, Bergen
Schlumberger is a knowledge-based company in the energy sector, with world class centers for Software Development located here in Norway. We have offices in Oslo, Kjeller, Bergen and Stavanger, responsible for delivering market leading software platforms for the oil and gas industry. With more than 200 software engineers, we are a leading employer and competency center for software development in Norway. Our work covers a wide range of activities, from software engineering to customer support, from technical documentation to quality assurance. We take great pride in our internationally diverse environment and our ability to consistently build innovative and integrated products with high quality. Schlumberger closely collaborates with Universities and Research Centers in Norway and throughout the rest of the world. All employees benefit from tailored and continuous training programs, state-of-the-art communication technologies and a global organization that supports knowledge sharing and team work. We encourage patent applications, conference participation and journal publications. For job opportunities, please contact sntc-recruiting@slb.com
The community is not as large or mature as the one around CoffeeScript, but it's growing. A notable community contribution is the TypeScript Definition Package Manager that makes it easier to manage definition files for your project. TypeScript is definitely worth checking out, but remember it is still a preview. Join my session at NDC if you want to learn more. Hope to see you there.
But what does it all mean?
Back when I used to post on newsgroups I would frequently be in the middle of a debate about the details of some behaviour or terminology, when one poster would say: “You’re just quibbling over semantics” as if this excused any and all previous inaccuracies. I would usually agree – I was indeed quibbling about semantics, but there’s no “just” about it. By Jon Skeet
Semantics is meaning, and that’s at the heart of communication – so for example, a debate over whether it’s correct to say that Java uses pass-by-reference [1] is all about semantics. Without semantics, there’s nothing to talk about. This has been going on for years, and I’m quite used to being the pedant in any conversation when it comes to terminology – it’s a topic close to my heart. But over the years – and importantly since my attention has migrated to Stack Overflow, which tends to be more about real problems developers are facing than abstract discussions – I’ve noticed that I’m now being picky in the same sort of way, but about the meaning of data instead of terminology. DATA UNDER THE MICROSCOPE When it comes down to it, all the data we use is just bits – 1s and 0s. We assemble order from the chaos by ascribing meaning to those bits… and not just once, but in a whole hierarchy. For example, take the bits 01001010 00000000: • Taken as a little-endian 16-bit unsigned integer, they form a value of 74. • That 16-bit unsigned integer can be viewed as a UTF-16 code unit for the character ‘J’. • That character might be the first character within a string. • That string might be the target of a reference, which is the value for a field called “firstName”. • That field might be within an instance of a class called “Person”. • The instance of “Person” whose “firstName” field has a value
which is a reference to the string whose first character is ‘J’ might itself be the target of a reference, which is the value for a field called “author”, within an instance of a class called “Article”. • The instance of “Article” whose “author” field (fill in the rest yourself…) might itself be the target of a reference which is part of a collection, stored (indirectly) via a field called “articles” in a class called “Magazine”. As we’ve zoomed out from sixteen individual bits, at every level we’ve imposed meaning. Imagine all the individual bits of information which would be involved in a single instance of the Magazine with a dozen articles, an editorial, credits – and perhaps even images. Really imagine them, all written down next to each other, possibly without even the helpful gap between bytes that I included in our example earlier. That’s the raw data. Everything else is “just” semantics. SO WHAT DOES THAT HAVE TO DO WITH ME? I’m sure I haven’t told you anything you don’t already know. Yes, we can impose meaning on these puny bits, with our awesome developer power. The trouble is that bits have a habit of rebelling if you try to impose the wrong kind of meaning on them… and we seem to do that quite a lot. The most common example I see on Stack Overflow is treating text (strings) and binary data (image files, zip files, encrypted data) as if they were interchangeable. If you try
to load a JPEG using StreamReader in .NET or FileReader in Java, you’re going to have problems. There are ways you can actually get away with it – usually by using the ISO-8859-1 encoding – but it’s a little bit like trying to drive down a road with a broken steering wheel, only making progress by bouncing off other obstacles. While this is a common example, it’s far from the only one. Some of the problems which fall into this category might not obviously be due to the mishandling of data, but at a deep level they’re all quite similar: • SQL injection attacks due to mingling code (SQL) with data (values) instead of using parameters to keep the two separate. • The computer getting arithmetic “wrong” because the developer didn’t understand the meaning of floating binary point numbers, and should actually have used a floating decimal point type (such as System.Decimal or java.math.BigDecimal). • String formatting issues due to treating the result of a previous string formatting operation as another format string – despite the fact that now it includes user data which could really have any kind of text in it. • Double-encoding or double-unencoding of text data to make it safe for transport via a URL. • Almost anything to do with dates and times, including – but certainly not limited to – the way that java.util.Date and System.DateTime values don’t inherently have a format. They’re just values.
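The layered interpretation of those sixteen bits can be made concrete in a few lines. TypeScript is used here purely for illustration (the idea is language-neutral): the same two bytes yield 74 as a little-endian 16-bit unsigned integer, and 'J' as a UTF-16 code unit.

```typescript
// The bits 01001010 00000000 from the example above, as raw bytes.
const bytes = new Uint8Array([0b01001010, 0b00000000]);
const view = new DataView(bytes.buffer);

// Same bits, two layers of meaning:
const asUInt16 = view.getUint16(0, true); // little-endian 16-bit unsigned: 74
const asChar = String.fromCharCode(asUInt16); // UTF-16 code unit: "J"

console.log(asUInt16, asChar);
```

Nothing in the bytes themselves says which interpretation is right; the meaning lives entirely in the code that reads them.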
The sheer bulk of questions which indicate a lack of understanding of the nature of data is enormous. Of course Stack Overflow only shows a tiny part of this – it doesn’t give much insight into the mountain of code which handles data correctly from the perspective of the types involved, but does entirely inappropriate things with those values from the perspective of the intended business meaning of those values. It’s not all doom and gloom though. We have some simple but powerful weapons available in the fight against semantic drivel. TYPES This article gives a good indication of why I’m a fan of statically typed languages. The type system can convey huge amounts of information about the nature of data, even if the business meaning of values of those types can be horribly overloaded. Maybe it would be good if we distinguished between human-readable text which should usually be treated in a culture-sensitive way, and machine-parsable text which should usually be treated without reference to any culture. Those two types might have different operations available on them, for example – but it would almost certainly get messy very quickly. For business-specific types though, it’s usually easy to make sure that each type is really only used for one concept, and only provides operations which are meaningful for that concept.
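As a hedged illustration of that last point, a dedicated type can ensure a value is only ever used as one concept. The `CustomerId` class and its format rule below are entirely made up (TypeScript here, but the same idea works in any statically typed language).

```typescript
// A business-specific type: a customer id is not just any string, and the
// type system now stops us from passing an arbitrary string where one is
// expected.
class CustomerId {
  constructor(private readonly value: string) {
    // Illustrative format rule: "C" followed by six digits.
    if (!/^C\d{6}$/.test(value)) {
      throw new Error("Not a well-formed customer id: " + value);
    }
  }

  toString(): string {
    return this.value;
  }
}

// A hypothetical operation that can only receive a validated id.
function loadOrders(id: CustomerId): string {
  return "orders for " + id.toString();
}

console.log(loadOrders(new CustomerId("C123456")));
```

The payoff is exactly the one described above: the type is only ever used for one concept, and it exposes only the operations that are meaningful for that concept.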
MEANINGFUL NAMES Naming is undoubtedly hard, and I suspect most developers have had the same joyless experiences that I have of struggling for ten minutes to come up with a good class or method name, only to end up with the one we first thought of which we really don’t like… but which is simply better than all the other options we’ve considered. Still, it’s worth the struggle.
Our names don’t have to be perfect, but there’s simply no excuse for names such as “Form1” or “MyClass” which seem almost designed to carry no useful information whatsoever. Often simply the act of naming something can communicate meaning. Don’t be afraid to extract local variables in code just to clarify the meaning of some otherwise-obscure expression.
DOCUMENTATION I don’t think I’ve ever met a developer who actually enjoys writing documentation, but I think it’s hard to deny that it’s important. There tend to be very few things that are so precisely communicated just by the names of types, properties and methods that no further information is required. What is guaranteed about the data? How should it be used – and how should it not be used?
The form, style and detail level of documentation will vary from project to project, but don’t underestimate its value. Aside from behavioural details, ask yourself what meaning you’re imposing on or assuming about the data you’re dealing with… what would happen if someone else made different assumptions? What could go wrong, and how can you prevent it by expressing your own understanding clearly? This isn’t just important for large projects with big teams, either – it’s entirely possible that the person who comes to the code with a different viewpoint is going to be you, six months later.
CONCLUSION I apologise if this all sounds like motherhood and apple pie. I’m not saying anything new, after all – of
course we all agree that we need to understand the meaning of our data. I’m really not trying to waste your time though: I want you to take a moment to appreciate just how much it matters that we understand the data we work with, and how many problems can be avoided by putting effort into communicating that effectively and reading what others have written about their data. There are other approaches we can take beyond those I’ve listed above, too – much more technically exciting ones – around static code analysis, contracts, complicated annotations and the like. I have nothing against them, but just understanding the value of semantics is a crucial starting point, and once everyone on your team agrees on that, you’ll be in a much better position to move forward and agree on the meaning of your data itself. From there, the world’s your oyster – if you see what I mean. Reference [1] It doesn’t; references are passed by value, as even my non-programmer wife knows by now. That’s how often this myth comes up.
Jon Skeet is a Java developer for Google in London, but he plays with C# (somewhat obsessively) in his free time.
It's time to introduce your kid to coding
Join the Code Club Workshop at NDC! Kodeklubben teaches Norwegian kids how to code on top of Oslo Plaza, 11th of June from 17.30-20.30. The event is suitable for juniors aged 8-18 years. The aim of the workshop is to provide the attendees with enough skills to continue coding at home. Bring your kids and come. First come, first served. Sign up at www.ndcoslo.com/codingforkids
SEARCH
in SharePoint 2013
By Helge Grenager Solheim and Jørgen Vinne Iversen
Fast Search & Transfer was acquired by Microsoft in 2008. This was the second largest Microsoft acquisition and got Microsoft to the Gartner magic quadrant for enterprise search. FAST at that time was a search engine for high-end customers: customers would typically have millions of items, hundreds of search queries per second, highly customized solutions and specialized search admins in their IT organization. Today, the FAST search engine powers SharePoint 2013 and all types of customers. A significant shift in the industry is the move from local IT-driven solutions to cloud solutions, reducing the overall life-time cost of developing and running solutions. Now the FAST search engine is shipped as part of SharePoint 2013, and immediately usable right out of the box without extensive customization. IT operation is simple, like the rest of SharePoint, and does not require specialized search skills. With Office 365, Microsoft offers its best enterprise search engine at a low cost to everyone, hosted by Microsoft in the cloud. Luckily, for developers, there are even more opportunities than before to create exciting search solutions. Developers can build advanced search-driven solutions and deploy them to Office 365 for their customer without having to worry about IT operations. We’re all in, are you? NEW OUT-OF-THE-BOX SEARCH UX EXPERIENCES We’ve designed a whole new search experience for SharePoint 2013, with the FAST search engine at the core. The default enterprise search center contains a search box (naturally) and a number of search verticals providing specialized experiences, e.g. video search. Refiners allow you to drill into your search results to look for results of particular types (Excel/PDF/web pages), from particular authors, within specific time periods or with any associated metadata.
The user experience is all built on industry standards such as HTML5/JavaScript/CSS, and the search center is built up of Search Web Parts that contain display templates, framed in Figure 1:
Figure 1
Every colored box in Figure 1 is one display template, which is customizable by you. On the left-hand side there are two display templates for regular list refiners and below them a histogram refiner. In the middle we have the main results, with different display templates for Word documents, PowerPoint presentations, web pages, and any other document types you have. Display templates to use for any given search results are selected based on a rule set, which we call Result Types, and is often based on the document type, Word for example. For each result type there is a hover card which appears when you hover over the result. The hover card displays a preview of the result, number of times the document has been opened, as well as other information such as the main sections and actions like follow, edit and view library. The latter opens the folder that result is stored in, to allow the user to find similar items there.
Notice that in Figure 1 there is a result block with PowerPoint presentations just below the search box. This is because the query contains the action word deck, which triggers a built-in query rule to interpret that the user intends to find presentations rather than documents containing the word deck. The query rule then blends in a result block containing results of a particular type, in this case PowerPoint presentations. Customizing the search experience by creating similar query rules that add results, promote results to the top and rewrite queries is a great way of improving the search experience without coding. In addition, the new Content Search web part allows you to present search-driven content in a flexible and full-fidelity manner.

SEARCH-DRIVEN EXPERIENCES AND SEARCH APPS
In SharePoint 2013, search surfaces in a number of new areas: searching in document libraries and lists is now fully search-driven, and for community sites and newsfeeds search drives the linking of hash tags. There is also a concept of tags, documents, sites and people you are following. Based on what you are already following, a new analytics processing engine provides suggestions for other similar tags, documents, sites and people to follow.

CUSTOMIZING THE EXPERIENCE
How can you as a developer leverage the new search components and build your own solutions for SharePoint 2013? Let’s work through an example where we increase the visibility of document ratings in search. In SharePoint document libraries, all items have ratings as shown below:
These ratings are stored in the search index as a retrievable managed property AverageRating. What we want to do is to visualize this rating along with each Word document in the result set. To do that, we change the display template for Word documents to include a visualization of the AverageRating managed property from the index. Best practice is to copy and modify the original display template, which you can find in the Master Page gallery under the Display Templates/Search/ folder. Remember to give your new display template a different title before editing. At the top of the display template, modify the ManagedPropertyMapping to include the AverageRating managed property. This tells the display template to retrieve the contents of this managed property when issuing a query:

<mso:ManagedPropertyMapping msdt:dt="string">'AverageRating':'AverageRating','Title':'Title','Path':'Path','Description':'Description','EditorOWSUSER':'EditorOWSUSER','LastModifiedTime':'LastModifiedTime','CollapsingStatus':'CollapsingStatus','DocId':'DocId','HitHighlightedSummary':'HitHighlightedSummary','HitHighlightedProperties':'HitHighlightedProperties','FileExtension':'FileExtension','ViewsLifeTime':'ViewsLifeTime','ParentLink':'ParentLink','FileType':'FileType','IsContainer':'IsContainer','SecondaryFileExtension':'SecondaryFileExtension','DisplayAuthor':'DisplayAuthor','ServerRedirectedURL':'ServerRedirectedURL','SectionNames':'SectionNames','SectionIndexes':'SectionIndexes','ServerRedirectedEmbedURL':'ServerRedirectedEmbedURL','ServerRedirectedPreviewURL':'ServerRedirectedPreviewURL'</mso:ManagedPropertyMapping>
Then include a JavaScript file with a function that takes care of rendering the contents of the AverageRating managed property:
Helge is a program manager lead in the Microsoft FAST Information Experiences team, focusing on various aspects of search including user experience, security trimming, search schema, indexing structures, performance and fault tolerance.
Jørgen is a developer lead in the Microsoft FAST Information Experiences team, where he works on user experience, search-driven scenarios and relevance. In the past Jørgen has worked on enabling Internet business search scenarios for SharePoint 2013, FAST Search for SharePoint 2010, FAST ESP and FAST's short-lived music streaming service, Ezmo.
<script> $includeScript(this.url, "~sitecollection/_catalogs/masterpage/Display Templates/Search/StarRating.js"); </script>
This is the contents of the StarRating.js file:

function RatingHtml(averageRating) {
  var ratingHtml = '<div class="starRating">';
  var iAvg = parseInt(averageRating);
  var fAvg = parseFloat(averageRating);
  for (var i = 0; i < iAvg; i++) { // Full stars
    ratingHtml += '<img src="/_layouts/15/images/RatingsSmallStarFilled.png"/>';
  }
  if (iAvg < 5) {
    if ((fAvg - iAvg) > 0) { // Half star
      ratingHtml += '<img src="/_layouts/15/images/RatingsSmallStarLeftHalfFilled.png"/>';
    } else {
      ratingHtml += '<img src="/_layouts/15/images/RatingsSmallStarEmpty.png"/>';
    }
    if (iAvg < 4) { // Remaining empty stars
      for (var p = 0; p < 4 - iAvg; p++) {
        ratingHtml += '<img src="/_layouts/15/images/RatingsSmallStarEmpty.png"/>';
      }
    }
  }
  ratingHtml += '</div>';
  return ratingHtml;
}
And finally, render the AverageRating value on the page: <div id="starrating"> _#= RatingHtml(ctx.CurrentItem.AverageRating) =#_</div>
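As a quick sanity check (a hypothetical snippet; any JavaScript engine will do, SharePoint is not required), the rendering function can be exercised outside a display template. For an average rating of 3.5 it emits three full stars, one half star and one empty star:

```javascript
// RatingHtml as defined in StarRating.js, repeated here so the
// snippet is self-contained and runnable outside SharePoint.
function RatingHtml(averageRating) {
  var ratingHtml = '<div class="starRating">';
  var iAvg = parseInt(averageRating);
  var fAvg = parseFloat(averageRating);
  for (var i = 0; i < iAvg; i++) { // Full stars
    ratingHtml += '<img src="/_layouts/15/images/RatingsSmallStarFilled.png"/>';
  }
  if (iAvg < 5) {
    if ((fAvg - iAvg) > 0) { // Half star
      ratingHtml += '<img src="/_layouts/15/images/RatingsSmallStarLeftHalfFilled.png"/>';
    } else {
      ratingHtml += '<img src="/_layouts/15/images/RatingsSmallStarEmpty.png"/>';
    }
    if (iAvg < 4) { // Remaining empty stars
      for (var p = 0; p < 4 - iAvg; p++) {
        ratingHtml += '<img src="/_layouts/15/images/RatingsSmallStarEmpty.png"/>';
      }
    }
  }
  ratingHtml += '</div>';
  return ratingHtml;
}

var html = RatingHtml("3.5");
var full = (html.match(/StarFilled/g) || []).length;   // 3
var half = (html.match(/HalfFilled/g) || []).length;   // 1
var empty = (html.match(/StarEmpty/g) || []).length;   // 1
```

The five image tags always add up to a full row of stars, which is what keeps the result list visually aligned.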
The last step is to assign your new display template to a new result type based on the result type for Word documents you get out of the box. It’s easy: just copy the out-of-the-box result type for Word and tell it to use your new display template. This makes the search results show the average rating of your document:
CONCLUSION
SharePoint 2013 gives you the powerful FAST search engine, and the choice between setting it up on-premises or using Office 365. In either case, you can customize search in a number of ways through display templates, configuration of search web parts, search schema mappings and query rules. You can also leverage custom document processing and custom ranking models. In this article we’ve shown you how to add the average rating of each document to the search results page. You can package your solutions as SharePoint Search Apps or in other end-points that utilize the power of rich object model APIs, REST/OData or SOAP. Whichever technology you choose, and however you want to customize search, we hope to see you in the cloud. We’re all in!
SECURITY from the ground up
By Jon McCoy
© Shutterstock
When developing a system, "security" can take on secondary importance as we tend to focus our construction efforts on building a scalable, resilient, and maintainable system; however, the attack surface and the ways the system can resist attacks are critical to our end-users. The developers and designers of a system are critical to the security landscape: tiny choices by a developer can determine much of what we have to work with. In the development life cycle, the sooner security feedback is brought into the design process the better; the defensibility of an enterprise often comes down to constraints put in place by choices made long ago by one or two developers.
Security is critical in today's world from your cell phone to your payment gateway, but what security was put in to defend you?
No program runs alone. No system is independent. Your bookkeeping software depends on data servers; the radio in your car is connected to Twitter. Regardless of what your product is or does, it is part of a larger layered system. If a layer/component/program/system is compromised, it can drastically impact the security of other systems, both connected and disconnected. In my view, if a system is segmented and constructed in the correct way, it can minimize the impact of a security breach. If the system is crafted correctly, the security breach can be slowed. If the system has proper safeguards, a breach can reliably be identified and remedied.

More security is not always better. Security can cripple a project. In the wrong hands, security can become a bad word. Security can start to represent wasted time, random delays, wasted progress and STRESS. A bad security expert will hand over a list of problems: "You have 100 Critical Threats, 50 High Priority, and 1,000 Green Concerns..." This can quickly put a project on a bad path. A bad solution can be worse than the original problem. Each security threat needs to be put in context and understood, not simply given to a programmer with orders to fix it. A common misstep is responding to "your password BizName@!3z is insecure" by making the password 15 or 25 random characters, forcing the user to change it every 10 days, and adding hardware tokens and biometrics... but is it more secure? The end-users now write critical passwords under keyboards, people misplace a hardware token and call to request it be disabled, and the biometric software turns out to be easy to bypass and to have critical security flaws.

A good security expert can say: this crypto is better than that crypto; you should not require a password length of 14+ for every user without augmentation; this security/system will or will not fit this problem/group; this vulnerability comes from this process or structure; this fix will or will not provide value; the tools or processes in production have this flaw, which can be solved in X/Y/Z way. A good security setup will highlight your likely first attack points; it should account for the paths from any likely breach point to critical resources and have tested responses to handle a security breach.

ProTip for Developers: Build a security unit test or two, set aside one day a month for security and stability, and never expose your database directly; always build a protection service layer around it. Get a YubiKey => www.yubico.com; join OWASP => www.owasp.org

ProTip for Hackers: Desktop developers are not trained in security. They don't keep up on the new jitter attacks for AES or the old CBC weakness, they do not sanitize SQL commands server side, and they think hashed passwords are non-reversible.
Jon McCoy is a .NET Software Engineer that focuses on security and forensics. He has worked on a number of Open Source projects ranging from hacking tools to software for paralyzed people.
ERLANG for C# Developers
By Bryan Hunter

Your primary language may be C#, but I suspect you’re a polyglot. SQL for set-based operations on relational data? JavaScript for client logic in the web browser?
Language complements like these are valuable. Most language stacks (.NET, Java, Ruby) aren’t complements though; they do similar kinds of things fairly well and compete on friction-at-task, library reach, and hipness. A true complement to .NET you may not be familiar with is Erlang. The things Erlang is good at are things at which .NET stinks, and vice versa. Erlang is a functional, dynamic, declarative, and concurrency-oriented programming language. Its sweet spots are fault-tolerance, concurrency and distribution. Erlang comes with a set of battle-tested libraries, patterns, and middleware called OTP. The foundation of Erlang and OTP is the Erlang Runtime System (the ERTS). The ERTS is more of a special-purpose operating system than a normal language runtime. Odd thing to say, so I’ll explain… Most language runtimes such as the CLR are thin wrappers over the underlying OS. They map concepts like Windows threads to .NET threads. The primary thing Windows knows about your C#/C++/Ruby code is that it can’t be trusted; Windows assumes the worst. In Erlang there is no concept of threads; the underlying OS and the Erlang concurrency primitives are on different planets.
Now imagine you were going to write an operating system to run code written in a single language (Erlang). Imagine your “OS” has a clear goal of supporting fault-tolerant, concurrent, distributed systems and anything else is secondary. Imagine as part of this pact your “OS” can place constraints and shape the language. This “OS” can tap in and know specifics about the code it hosts. It can trust the code. Neat! So how do those sweet spots fit together? If your primary goal is fault tolerance you first need a concurrency model that is safe, simple, and efficient. Next you need a distributed computing model that is safe, simple to setup, and transparent to code against. Why? A single process running on a single machine isn’t very fault-tolerant. In Erlang you model each concurrent activity as an Erlang “process”. Erlang processes are constructs of the ERTS and are not related to OS processes or threads. Erlang processes are fully isolated from each other (no shared memory). Each Erlang process has its own little heap & stack, its own garbage collector, and its own mailbox. A process can only communicate with other processes via its mailbox. A process can monitor other processes and receive an ‘exit’
message if the monitored process dies. The OTP concept of supervisor trees builds upon these primitives to define elegant restart strategies. An Erlang process is very lightweight in terms of memory (around one kilobyte) and more importantly lightweight in terms of scheduling. The cost of context-switching between Erlang processes is nearly zero. When an Erlang node (instance of the ERTS) is started it creates one scheduler for each processor core. If you create 100,000 Erlang processes they will be distributed evenly across the cores. Each of the schedulers goes round-robin through its processes giving each a chance to do 2,000 slices of work (called reductions) from its work queue. Once the 2,000 reductions are complete the individual processes can't prevent the scheduler from moving to the next process. Hogging and blocking are not possible. This and per-process garbage collection are why “soft real time” is often used to describe Erlang. Distribution in Erlang is very simple as well. Given two machines running Erlang (and a shared cookie) the one-liner: net_adm:ping(node@hostname) will form a two-node cluster. A process can spawn processes on any node in the cluster. A process can
© Sparkstudio/Shutterstock
send messages to any process in the cluster. If you have a compiled Erlang module on one node you can deploy it to the other connected nodes with the one-liner nl(modulename). All of this works even if the cluster is composed of heterogeneous operating systems and hardware architectures (e.g. x64 Windows 8 and ARM-based Linux). Another powerful feature of the ERTS is hot code loading, which allows new code to be deployed while a system is running, without shutting it down or losing state. Hot code loading works across distributed nodes the same as it does on the local node.
All very neat, but if you’re like most clever .NET developers you’re wondering “Can I just learn how Erlang works and plunder and port its ideas?” Short answer: no. Several hundred man-years of development have gone into building and improving Erlang. It works because of its foundations. Facebook’s chat, CouchDB, RabbitMQ, GitHub pages, Amazon SimpleDB, WhatsApp, and Opscode Chef use Erlang. Follow their lead: learn it; love it. Add it to your toolbox alongside C#. Enjoy a thing of beauty.
Bryan Hunter is a geek, a Microsoft MVP in C#, a partner at Firefly Logic, and the founder of Nashville Functional Programmers. Bryan is obsessed with Lean, functional programming, and CQRS.
COUCHBASE NoSQL for SQL Developers
Couchbase Server 2.0 ships with a “beer-sample” bucket (database) containing thousands of brewery and beer documents from the Open Beer DB (http://www.openbeerdb.com). These documents contain parent child relationships, detailed address information, geo-spatial details, and several other attributes. This article will use this sample data to demonstrate how to translate common categories of SQL queries to Couchbase views. By John Zablocki
The beer-sample bucket contains two types of documents, beers and breweries. A brewery document contains a name, information about its location, and other details.

//key: ushers_of_trowbridge
{
  "name": "Ushers of Trowbridge",
  "city": "Trowbridge",
  "state": "Wiltshire",
  "code": "",
  "country": "United Kingdom",
  "phone": "",
  "website": "",
  "type": "brewery",
  "updated": "2010-07-22 20:00:20",
  "description": "",
  "address": [ ],
  "geo": {
    "accuracy": "APPROXIMATE",
    "lat": 51.3201,
    "lng": -2.208
  }
}
A beer document contains details such as its brewery, alcohol by volume, and style.
Looking at these documents, there are a few key points that are important to understand:
• Taxonomy is provided to documents conventionally, by including a “type” property
• Beer documents reference their parent breweries by way of the “brewery_id” property
• The key (id) of a brewery document is contrived in part by its “name” property
• The key of a beer document is contrived in part by its and its brewery’s “name” properties

It should be a straightforward exercise to imagine this data as relational. A BREWERY table would contain rows of brewery records with columns matching the brewery document properties. The BEER table would contain rows of beers with columns matching beer document properties. A foreign key constraint on BREWERY_ID would exist between the two tables. With the relational model in place, the first query to consider is simply one that retrieves all brewery records.

SELECT brewery_id FROM breweries
//key: goose_island_beer_company_fulton_street-stockyard_oatmeal_stout
{
  "name": "Stockyard Oatmeal Stout",
  "abv": 6,
  "ibu": 0,
  "srm": 0,
  "upc": 0,
  "type": "beer",
  "brewery_id": "goose_island_beer_company_fulton_street",
  "updated": "2010-07-22 20:00:20",
  "description": "",
  "style": "Oatmeal Stout",
  "category": "British Ale"
}
This simple query retrieves the entire table of breweries, returning only its primary key. While a typical use case in RDBMS development would probably see all or most columns returned in the query, the analogy to Couchbase views works best with only the primary key returned. A view in Couchbase Server is a definition for an index. You write a map function in JavaScript that creates an index on none, one or more properties from a document. A view is often referred to as a secondary index, because in Couchbase Server the primary index is always the key that was used to insert the document through Couchbase Server’s key/value CRUD API. After a map function is created, it examines every persisted document in a bucket. New or modified documents that have been persisted will cause the index to be incrementally updated. Knowing that all persisted documents will be passed through the map function means that the “select all breweries” function must know whether a document is a brewery. Recall from above that documents have a “type” property. This “type” property will be used to discriminate brewery documents. Before examining the JavaScript map function, .NET developers might find it useful to examine the analogous LINQ code.

var breweries = documents.Where(d => d.Type == “brewery”).Select(d => d.Id).ToArray();

The analogy isn’t perfect, as the result above doesn’t produce an index. However, the basic idea holds. Given a collection of documents, emit the ID for all documents of type “brewery.” In Couchbase, map functions use a special “emit” function to add a record to the index.

function (doc, meta) {
  if (doc.type == "brewery") {
    emit(null, null);
  }
}

In the map function above, after a document is found to be of type “brewery” a record is emitted into the index. In the case of a “select all” view, there is no need to emit a property to be indexed. Views always contain the document ID (the key from the key/value API). So in the example above, the index contains the primary keys for all breweries.

{"total_rows":1412,"rows":[
{"id":"21st_amendment_brewery_cafe","key":null,"value":null},
{"id":"357","key":null,"value":null},
{"id":"3_fonteinen_brouwerij_ambachtelijke_geuzestekerij","key":null,"value":null},
{"id":"512_brewing_company","key":null,"value":null},
{"id":"aass_brewery","key":null,"value":null},
…
] }

The previous snippet is the JSON representation of an index that is returned to an SDK. The id is used by the SDK to look up the original document using the key/value API. The key is the indexed property, null in this case. The value is an optional projection from the document, or some value over which aggregation will be performed. Using the .NET SDK, a query may be performed against the view to retrieve all brewery documents.

var view = client.GetView("breweries", "all");
foreach(var row in view) {
  var doc = client.Get(row.ItemId);
}

The snippet above demonstrates the typical pattern for using views with Couchbase, which is to use the view to get the original document by way of the key/value API. This is not a typical pattern in relational development. One would not likely select the primary key from a table only to go back to that table to get the remaining columns. The reason this is efficient in Couchbase is that the key/value API works against Couchbase Server’s built-in cache. The index is read from disk and the documents are fetched from RAM.

To index breweries by name, the map function above needs only minor changes. The SQL equivalent is to add a WHERE clause to the SELECT example above.

SELECT brewery_id FROM breweries WHERE name = ?

The map function needs to emit a key, which will be used to answer the WHERE clause portion of the query. The property (or properties) provided to emit as its first argument are used to answer WHERE clause queries.

function (doc, meta) {
  if (doc.type == "brewery" && doc.name) {
    emit(doc.name, null);
  }
}

The index now includes the name properties in the key.

{"total_rows":1412,"rows":[
{"id":"21st_amendment_brewery_cafe","key":"21st_amendment_brewery_cafe","value":null},
{"id":"3_fonteinen_brouwerij_ambachtelijke_geuzestekerij","key":"3_fonteinen_brouwerij_ambachtelijke_geuzestekerij","value":null},
{"id":"357","key":"357","value":null},
{"id":"512_brewing_company","key":"512_brewing_company","value":null},
{"id":"aass_brewery","key":"aass_brewery","value":null},
…
] }
With this new index, using the .NET SDK a query by name may be performed. In the example below, the view should return only a single row, but the view still must be enumerated to execute the query.

var view = client.GetView("breweries", "by_name").Key("aass_brewery");
view.Count(); //should be one
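The way the view engine turns documents into index rows can be sketched with a small in-memory harness (hypothetical code: in Couchbase the emit function is provided implicitly by the view engine, while here it is passed in as an argument so the snippet runs anywhere):

```javascript
// Hypothetical harness: run a Couchbase-style map function over in-memory
// documents and collect emitted rows, the way a view builds its index.
function buildIndex(docs, map) {
  var rows = [];
  docs.forEach(function (entry) {
    function emit(key, value) {
      rows.push({ id: entry.meta.id, key: key, value: value });
    }
    map(entry.doc, entry.meta, emit); // emit passed explicitly in this sketch
  });
  return rows;
}

// Two sample documents, discriminated by their "type" property.
var docs = [
  { meta: { id: "aass_brewery" },
    doc: { type: "brewery", name: "Aass Brewery" } },
  { meta: { id: "aass_brewery-bock" },
    doc: { type: "beer", name: "Bock", brewery_id: "aass_brewery" } }
];

// The "breweries by name" map function from the article.
function byName(doc, meta, emit) {
  if (doc.type == "brewery" && doc.name) {
    emit(doc.name, null);
  }
}

var rows = buildIndex(docs, byName);
// rows -> [{ id: "aass_brewery", key: "Aass Brewery", value: null }]
```

The beer document is passed through the map function but emits nothing, so only the brewery ends up in the index.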
A more complex SQL query would group breweries by their location details. The query below could answer the question: how many breweries are in Boston, Massachusetts?
SELECT   country, state, city, COUNT(brewery_id) AS Count
FROM     breweries
WHERE    (country = 'United States') AND (state = 'Massachusetts')
GROUP BY country, state, city
The map function for this query is not much different than those above. The biggest change is that a compound key will be emitted. This compound key is simply an array of properties. function (doc, meta) { if (doc.type == "brewery" && doc.country && doc.state && doc.city) { emit([doc.country, doc.state, doc.city], null); } }
The index produced by this map function now contains an array for a key. {"total_rows":1229,"rows":[ {"id":"bersaglier","key":["Argentina","Buenos Aires","San Martin"],"value":null}, {"id":"malt_shovel_brewery","key":["Australia","New South Wales","Camperdown"],"value":null}, {"id":"lion_nathan_australia_hunter_street","key":["Australia","New South Wales","Sydney"],"value":null}, {"id":"tooheys","key":["Australia","New South Wales","Sydney"],"value":null}, {"id":"tooth_s","key":["Australia","New South Wales","Sydney"],"value":null} … ] }
To group and count the results, a reduce function is applied to the output of the map function. In Couchbase, three built-in reduce functions are supplied. It is rare that one would write a custom reduce function, but it may be done. To get a count of breweries in a particular city, the built-in count function is used. This function is expressed in a single call and will be fed the output of the map function above.

_count
Applying this reduce function to the map output results in an index with aggregate data.

{"rows":[
{"key":["Argentina","Buenos Aires","San Martin"],"value":1},
{"key":["Australia","New South Wales","Camperdown"],"value":1},
{"key":["Australia","New South Wales","Sydney"],"value":3},
{"key":["Australia","NSW","Picton"],"value":1},
…
] }
When this view was queried, a group parameter was applied to provide grouped results. Additionally, a group level parameter may be used to group at a higher level. var view = client.GetView("breweries", "by_location").Group(true);
Having applied a group level of 2, this aggregation result did not consider the city.

{"rows":[
{"key":["Argentina","Buenos Aires"],"value":1},
{"key":["Australia","New South Wales"],"value":4},
{"key":["Australia","NSW"],"value":1},
{"key":["Australia","Queensland"],"value":1},
…
] }
The .NET SDK has a method for applying group levels.
var view = client.GetView("breweries", "by_location").Group(true).GroupAt(2);
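What _count does with a group level can be sketched in plain JavaScript (a hypothetical illustration; Couchbase maintains these aggregates incrementally inside its view engine):

```javascript
// Hypothetical sketch of grouped _count: truncate each compound key to the
// requested group level, then count map-output rows per truncated key.
function groupCount(rows, groupLevel) {
  var counts = {};
  rows.forEach(function (row) {
    var key = JSON.stringify(row.key.slice(0, groupLevel));
    counts[key] = (counts[key] || 0) + 1;
  });
  return counts;
}

// A slice of by_location map output, as shown earlier in the article.
var mapOutput = [
  { key: ["Australia", "New South Wales", "Camperdown"], value: null },
  { key: ["Australia", "New South Wales", "Sydney"], value: null },
  { key: ["Australia", "New South Wales", "Sydney"], value: null },
  { key: ["Australia", "New South Wales", "Sydney"], value: null }
];

// Group level 3 counts per city; group level 2 ignores the city.
var byCity = groupCount(mapOutput, 3);
// byCity['["Australia","New South Wales","Sydney"]'] -> 3
var byState = groupCount(mapOutput, 2);
// byState['["Australia","New South Wales"]'] -> 4
```

Truncating the compound key is exactly why the order of properties in emit([country, state, city], …) matters: you can only group on a prefix of the key.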
The final query to examine is a bit more complex and addresses the master-detail pattern. Imagine the union of BREWERY and BEER rows, where the parent (brewery) row is followed immediately by its details (beers). To discriminate the breweries from beers, a second column is added. Brewery documents are marked with a 0 and beer documents a 1.
SELECT brewery_id, 0
FROM breweries
WHERE brewery_id = '21st_amendment_brewery_cafe'
UNION ALL
SELECT brewery_id, 1
FROM beers
WHERE brewery_id = '21st_amendment_brewery_cafe'
ORDER BY brewery_id
A similar view definition would emit a brewery document into an index, followed immediately by its beer documents.

function (doc, meta) {
  switch (doc.type) {
    case "brewery":
      emit([meta.id, 0], null);
      break;
    case "beer":
      emit([doc.brewery_id, 1], null);
      break;
  }
}
The map function above takes advantage of the fact that views use Unicode collation. Indexes are always ordered in what loosely resembles a case-insensitive, alphabetical order. Since the brewery_id of a beer document is the key (meta.id) of its brewery, compounding the brewery key with a 0 and beer keys with a 1 forces the brewery to appear first, followed by its beers. While not all SQL queries may be expressed by a view, many certainly can. The samples above demonstrate just a few. Each of these views is detailed in the tutorial that accompanies the .NET SDK sample app. For more information, including geo-spatial querying and paging, visit http://www.couchbase.com/develop/net/current.

{"total_rows":7303,"rows":[
{"id":"21st_amendment_brewery_cafe","key":["21st_amendment_brewery_cafe"],"value":null},
{"id":"21st_amendment_brewery_cafe-21a_ipa","key":["21st_amendment_brewery_cafe","21A IPA"],"value":null},
{"id":"21st_amendment_brewery_cafe-563_stout","key":["21st_amendment_brewery_cafe","563 Stout"],"value":null},
{"id":"21st_amendment_brewery_cafe-amendment_pale_ale","key":["21st_amendment_brewery_cafe","Amendment Pale Ale"],"value":null},
…
] }
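The effect of the 0/1 discriminator can be illustrated with a plain sort (hypothetical; Couchbase actually orders keys by Unicode collation, which this simple element-by-element comparison only approximates):

```javascript
// Compound [id, 0|1] keys sorted element by element: the shared brewery_id
// groups the rows, and the 0/1 discriminator puts the brewery row first.
var keys = [
  ["21st_amendment_brewery_cafe", 1], // a beer
  ["aass_brewery", 0],                // a brewery
  ["21st_amendment_brewery_cafe", 0], // a brewery
  ["aass_brewery", 1]                 // a beer
];

keys.sort(function (a, b) {
  if (a[0] !== b[0]) return a[0] < b[0] ? -1 : 1; // compare brewery ids first
  return a[1] - b[1];                             // then brewery (0) before beer (1)
});
// keys -> [["21st_amendment_brewery_cafe", 0],
//          ["21st_amendment_brewery_cafe", 1],
//          ["aass_brewery", 0],
//          ["aass_brewery", 1]]
```

Each brewery row lands immediately before its beer rows, which is the master-detail ordering the view relies on.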
John Zablocki is a Developer Advocate at Couchbase, where he is responsible for developing the Couchbase .NET Client Library.
DEVELOPERS: MEET POWERSHELL
Illustration: DarkGeometryStudio/Shutterstock
Many developers are oblivious to the fact that shell scripting on the Windows platform has evolved. A few years ago, I decided to learn PowerShell. While I am not an expert, I have found PowerShell to be a great help to me as a developer. Gone are the days of “shell envy” towards Unix platforms and the urge to install Cygwin.
By Vidar Kongsli
AUTOMATION
PowerShell has evolved into a powerful tool for automating Windows tasks. Its ability to automate builds, continuous integration, and deployment gives it a vital role in continuous delivery. Since its inception in the mid-noughties, PowerShell’s capabilities have grown greatly to include automation of many products such as Windows, Windows Azure, SQL Server, SharePoint and MSBuild. A simple Google search will give you lots of information on how to do specific tasks for almost everything. What is difficult to find, however, is information on how to use the language itself, how to structure your code for improved usability, and how to make your scripts readable and easy to use.

CODING PRACTICES
As developers, we are often interested in understanding a programming language in and of itself. We want to understand its features, its syntax and semantics, and how to program with it. Code snippets from Stack Overflow just don’t cut it. Just like any other piece of code, shell scripts have to be maintained. As the “Tao of Programming” states:
Though a program be but three lines long, someday it will have to be maintained
Coding practices are important, especially when the scripts grow large. In order to uncover good coding practices, developers need to internalize the programming language. They need to intimately understand its syntax, semantics, scoping, and how to modularize code.

OBJECTS
In PowerShell, everything is an object. Being a weakly typed, dynamic language, it is not always apparent what an object can do, or which class it might get its features from. Thus, introspection is key. There are a couple of ways to get information about your objects:

$x = ...
$x.GetType()
$x | get-member
The GetType method above might be familiar to .NET developers. Since PowerShell runs on .NET, objects have members that are inherited from .NET. The get-member cmdlet (“command-let”) lists all the member methods and properties on the object.
FUNCTIONS AND SCRIPT BLOCKS
Often, the cmdlet is thought of as the canonical construct in PowerShell. But when modularizing your scripts, the most basic structures, I would say, are the function and the script block. Functions are very similar to functions in other languages, and their usage is similar, too. There are a number of ways to define functions:
function f1 { “Hello world” } function f2 { “Hello, $args” } function f3 ($name) { “Hello, $name” }
Functions can be executed like so: f1 f2 “world” f3 “world”
Note that f2 has an implicit parameter, $args, while f3 has an explicit parameter. Developers familiar with the C family of languages (like C# and Java) should note that parentheses are not used around the parameters when calling a function. Furthermore, if the function takes more than one parameter, the parameters are not delimited by commas, but by space.

If a cmdlet and a function have the same name, PowerShell will always call the function. This can be leveraged to make changes and enhancements to built-in cmdlets, via so-called proxy functions.

Another important thing about functions that C# and Java programmers should note is that the return keyword is syntactic sugar that is not necessary on the last line of the function. In fact, all statements in the function that have a return type, and are not assigned to a variable or sent to another cmdlet, are returned in an array. Consider the following example:

function f {
  “Hello, “
  “world”
  md “Foo” | Out-null
  return “Bye for now”
}
This function will return an array of the strings “Hello, ”, “world” and “Bye for now”. The cmdlet md returns a directory object, so if we did not pipe it to Out-null, the directory object
would be in the returned array. Something to get used to. A script block is very similar to a function; it can be passed around using a variable, and it is typically used in a pipeline, as shown below.
$myscript = { “Hello, $_” }
Note that a script block does not have any lexical closure with the context in which it is defined: free variables in the block are bound in the context where it is executed.

PIPELINES
As in the Unix family of shells, a pipeline is a central concept in PowerShell. It allows us to combine cmdlets, filter content, transform content, iterate over and sort lists. For example:

4,2,5,1,6 | sort -desc # --> 6,5,4,2,1
4,2,5,1,6 | sort -desc | select -first 2 # --> 6,5
4,2,5,1,6 | where { ($_ % 2) -eq 0 } # --> 4,2,6
The script blocks discussed in the previous section can be used in pipelines. The foreach-object cmdlet can be used to iterate over all elements in the pipeline, and to execute the script block on each element. The current object is assigned to the implicit $_ parameter:
“Jack”, “Jill” | foreach { “Hello, $_” } “Jack”, “Jill” | foreach $myscript
PowerShell is a powerful tool for continuous delivery
SCOPING AND DOT-SOURCING
When working with larger scripts, the code must be organized in order to be maintainable. The first thing you should familiarize yourself with is the concept of scoping. Simply put, each script file has its own scope, each function has its own scope, each module has its own scope, and there is one global scope. By default, a function is available in the scope of the script file in which it is defined. However, it can be defined to live in the global scope:

function global:omnipresent { "I see you" }
Defining functions in the global scope is a powerful feature, but take care not to clutter the global scope. As a rule of thumb, you should define only a handful of functions and variables in the global scope. Keep it clean and simple. When executing a script file, any function defined in the file lives in a separate scope. However, functions in a file can be loaded into the current scope by "dot-sourcing" the file. This is an excellent way to load libraries into your scripts:

. ./lib/helpers.ps1
MODULES
Script modules provide functionality similar to dot-sourcing a file, but there are some noteworthy differences:
• A module is loaded using the Import-Module cmdlet. There is also a cmdlet for unloading a module, Remove-Module.
• A module can in effect have private functions that are not loaded into other scopes. This is done by explicitly naming the functions that are exported from the module, using the Export-ModuleMember cmdlet.
Additionally, there is a lot of functionality related to modules that I do not discuss in this article. When your module is under development - that is, when you are still making changes to it - you should use the Force parameter when loading the module into your scripts. If not, the changes you make will not be visible to your scripts if an older version of the module is already loaded:
Import-module ./lib/helpers.psm1 -Force
GOOD AUTOMATION SCRIPTS
PowerShell should be the preferred foundation for automating your development, build and deployment tasks. But what makes up a good automation script? How can this be achieved in PowerShell? Here are a few pointers:
1. Idempotent: your script should not fail if a task is already carried out.
2. Extrovert: your script should tell the world what it is doing. Send status messages to the standard output device and error messages to the standard error device.
3. Traceable: log progress to a file; consider the Start-Transcript cmdlet.
4. Good citizen: your script should behave as expected. For example, it should always provide proper exit codes, and always leave the system in a proper state (even if failing).
5. Reusable: consider proper structuring, naming, and documentation.
CONCLUSION
PowerShell lets developers on Windows automate builds, continuous integration, and deployment. Learning the core concepts and structures of the language allows you to use it effectively. Properly structuring your scripts lets you create reliable and reusable automation scripts. Used properly, PowerShell is a powerful tool for continuous delivery.
Vidar Kongsli is a managing consultant at Bekk Consulting in Oslo, where he is responsible for the .NET, SharePoint and continuous delivery groups.

Annonse: Should you join us? Requirements: >= 3 years of experience; a master's or Ph.D. degree; excellent technical skills; passionate about technology; triggered by challenges. You should join! E-mail us. :) stilling@conceptos.no
MVX: Model View Cross-Platform with MvvmCross By Stuart Lodge

© Shutterstock

Code gets complicated really quickly. We all start with such simple, clean ideas. But then complexity arrives - it just happens - and it builds and grows with every feature we tackle and with every line of code we write.
To help understand and manage this complexity we - as engineers - introduce patterns: common code structures that we employ to help separate out our logic into comprehensible, testable and reusable chunks. Back in the 90s one of the first of these patterns that I personally encountered was MVC - Model View Controller - which splits out the data from the display and the interaction. Beyond that, in the 00s I encountered Model-View-ViewModel - a pattern particularly popular in Windows development, where XAML's built-in
DataBinding provides a natural home for frameworks like MvvmLight, Caliburn, and many, many others. Both MVC and MVVM have been powerful tools - they are the patterns behind many of the successful programs, products and apps of recent years. But during the last few years, the software and hardware landscape has changed dramatically:
• Windows has lost its ubiquity;
• New smartphone and tablet devices have gained enormous popularity;
• New operating systems - especially Apple's iOS and Google's Android - have arrived, each of them bringing their own native APIs, their own increments in hardware capabilities and their own intrinsic look-and-feel for user experience.
And with this new multi-platform environment comes new complexity for us engineers - and so now we start to build and experiment with new patterns, including patterns like Model-View-CrossPlatform - MVX.
WHAT IS MVX?
The core values that define MVX are:
1. EVOLUTION - just as MVVM extended MVC, so MVX extends MVVM, attempting to reuse Model, View and ViewModel abstractions everywhere - not just on Windows.
2. SHARING - MVX tries to share Model and ViewModel code and functionality between platforms wherever it can, always looking to maximise the testability, reuse and maintainability of this 'common logic'.
3. NATIVE UX - unlike many other cross-platform approaches, MVX does not try to unify the View layer; instead it always allows designers and developers to present the View functionality in a device-specific, device-optimised and device-accelerated way.
4. RICHNESS - MVX always seeks to celebrate the richness of modern platforms - encouraging and facilitating the use of files, databases, disks, graphics, networking, location, vision, orientation, sound, voice, live tiles, notifications and the ever-growing pool of sensors and accessories that the device engineers are cramming into smaller and smaller devices.
Through these values, MVX looks for apps to achieve 'native' user experiences while leveraging shared Models and ViewModels in the background. Apps built using MVX principles should be fast, delightful and powerful, and offer familiar user experiences - apps should 'feel at home' on each and every device.
MVVMCROSS - AN MVX LIBRARY
MvvmCross is one library that aims to support these MVX values. It's a set of open source C# libraries, and is currently available for:
• Windows Phone
• Windows Store
• Windows Desktop (WPF)
• Xamarin.Android
• Xamarin.iOS
Beyond these, additional adaptations are also under construction for the Unity3D gaming platform and for the Mac OS X desktop via MonoMac and Xamarin.Mac.

Starting from a Windows-centric MVVM base, MvvmCross:
• Provides Data-Binding, ValueConverters, and ICommand implementations on all target platforms - providing an 'MvxBind' extension for Android XML and a set of 'CreateBinding' APIs for iOS UIKit elements.
• Uses Portable Class Libraries to maximize sharing of ViewModel and Model code.
• Pushes Interface Driven Development - using Dependency Injection and a lightweight Inversion of Control container to ensure code is flexible, extensible and testable.
• Provides 'Plugin' support - a common pattern on top of IoC which gives the shared, portable code access to the powerful platform-specific functionality on each device. This offers easy extensibility, especially when coupled with package management tools like NuGet.
• Is heavily 'convention' based - trying to make MVVM development feel 'natural' and to focus the developer's attention on developing his or her app rather than on the mechanics and challenges of MVX.
• Is tool friendly - trying to work with UI designer tools, wizards and package managers, testing, refactoring and code editing tools - all again to help developers focus more of their time and effort on their app code, and less on MVX itself.

SO WHAT DOES THE CODE FOR AN MVVMCROSS APPLICATION LOOK LIKE?
Each MvvmCross application starts from a 'core' - a shared central Portable Class Library. This core contains:
• the Models and Services in your business logic
• the ViewModels that encapsulate your app's screen-flow
• an App class - the 'entry point' for your core - responsible for 'wiring up' all your Models, Services and ViewModels with the MvvmCross IoC container.
On top of this core, each UI - each platform - then builds:
• a set of Views - databound UI displays - normally one for each ViewModel in your core.
• a Setup class - the 'entry point' for the UI - responsible for 'wiring up' all the Views.
• any support constructs that the platform requires (splash screens, main file, etc.)
An example of this is a simple TipCalculation app. There's a complete worked tutorial for this available at http://www.geekchamp.com/articles/building-a-complete-cross-platform-mvvm-app-with-mvvmcross. This tutorial takes you through the steps for turning a simple design into a 5-platform reality:
Figure 1 The TipCalc Tutorial - UI
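The ViewModel half of this split can be sketched outside C#, too. The following Python toy (all class and method names are illustrative only, not MvvmCross APIs) shows the essential contract that data-binding relies on: a shared ViewModel raises property-changed notifications, and a platform View subscribes and redraws.

```python
# A toy sketch of the ViewModel contract behind data-binding.
# MvvmCross itself is C#; these names are illustrative, not a real framework.

class ObservableViewModel:
    def __init__(self):
        self._listeners = []

    def subscribe(self, listener):
        # A platform View registers to hear about property changes.
        self._listeners.append(listener)

    def raise_property_changed(self, name):
        for listener in self._listeners:
            listener(name)

class TipViewModel(ObservableViewModel):
    """Shared logic that could back a UI on every platform."""
    def __init__(self):
        super().__init__()
        self.subtotal = 0.0
        self.generosity = 10  # tip percentage

    @property
    def tip(self):
        return self.subtotal * self.generosity / 100.0

    def set_subtotal(self, value):
        self.subtotal = value
        self.raise_property_changed("tip")  # a bound View would redraw the tip label

vm = TipViewModel()
changes = []
vm.subscribe(changes.append)
vm.set_subtotal(100.0)
print(vm.tip)    # 10.0
print(changes)   # ['tip']
```

The point of the split is that TipViewModel contains no UI code at all, so the same class can be unit-tested once and reused behind every platform-specific View.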
At the end of the tutorial you will end up with a single 5-platform solution in Visual Studio, like the one shown in Figure 2.

THE FUTURE
MVX and MvvmCross are both very young. The technological environment in which we are working is very dynamic. We are seeing an explosion of app opportunities, not only on mobile and tablet, but also on game consoles, on smart TVs, in connected homes, in in-car computing and on to the Internet of Things. We expect that these challenges and opportunities will see further MVX values, ideas and patterns evolve and grow. As users continue to embrace new, ever more personal technology, as businesses continue to build more and more interaction with these users, and as the demand for and success of 'native over generic' continues, it's clear that the demand for cross-platform code will continue to rise.
Figure 2 The TipCalc Tutorial - Code
Many more examples, including much larger ones that show how to scale and how to cope with native complexity, are available under the Ms-PL license on GitHub - see https://github.com/slodge/MvvmCross and https://github.com/slodge/MvvmCrossTutorials/. For each of these, you'll see the same pattern of solution: a shared core and a set of platform-specific UIs.

MVVMCROSS V3 - AKA HOT TUNA
MvvmCross is now just 18 months old, but has already been used in many apps, including those aimed at both consumers and business.
This Spring and Summer, 2013, sees the release of MvvmCross v3 – Hot Tuna. Combined with the recent releases of the new Windows 8, Windows Phone 8 and Xamarin 2.0 SDKs, this release is really a significant step for MvvmCross and comes at a time when C# itself is in its most exciting dynamic state yet.
And thanks also to everyone who’s contributed to MvvmCross over the last 18 months, including many who have earned MvvmCross Badges of Awesomeness:
For more on Hot Tuna, please see my blog - http://slodge.blogspot.co.uk/ - which I update frequently with news and articles.
Figure 3 Some apps built on top of MvvmCross - Brüel & Kjær Noise Monitoring, CentraStage Asset Management, Lions Official
THANKS Thanks for reading.
Stuart Lodge, @slodge, works as founder, salesman, project manager, developer, and tea-maker for Cirrious.
Annonse
Keeping up with it all… - how If Insurance replaces its old systems while gearing up for the future By Marianne Moe-Helgesen
Large development projects aiming to replace old mainframe systems face a challenge their predecessors did not: how on earth can we keep up with the rapid pace of technological innovation happening on the user and customer side? In the seventies and eighties, years could go by between significant changes in how the masses used technology. Now we have constant small revolutions. Only 6 years ago we did not have Twitter, the iPhone or the iPad (how did we manage to communicate at all back then?). Today smartphones, tablets and social networks are all important influencers on IT strategy. When If's commercial division, serving small and medium-sized companies in the Nordic region, initiated the Waypoint project in 2006, the aim was to replace all existing core systems in the Nordic countries with one common system. Today it is still one of the largest .NET projects in the Nordics, employing 100 people in total, both IT and business experts. The system has been rolled out in Sweden, Denmark and Norway, and development for Finland is about to start. Although the goal of the project is still the same, the world in which the system exists has changed dramatically. In 2006 we focused on the internal users, providing them with the
functionality they needed to serve customers over the phone or by email. That's not how customers want to conduct business anymore. Now they want to interact directly with the insurance system themselves: finding information, getting quotes, buying insurance and reporting claims are just some of the services customers want, and they want to do it at any time of the day, any day of the year. For the Waypoint project, this means that we constantly need to adapt to new technology and trends in addition to delivering planned features. The new usage patterns require that we rethink the entire architecture to meet the demands of the future. One example is that we, over the past few years,
have gone from a traditional synchronous request-response system to a distributed pub-sub architecture. The only way we have managed to achieve this is by having extraordinarily skilled developers and a group of heavily involved business experts. We are immensely proud of what the team has delivered, while at the same time going live with new business critical functionality each month. I doubt you’ll find many companies as large as If where the term “refactoring” is actually known and appreciated by the top management! Want to know more? Visit If insurance at NDC! We are always interested in meeting talented developers.
Partners:
ndcoslo.com
twitter.com/NDCoslo
JWT
OAuth OpenID
If you want to build web services that can be called by arbitrary clients (like browsers, web, mobile and desktop apps), you need to model them using the “Web API approach”. To secure such services, you also have to learn a set of new technologies (again). In those new technologies, not only the mechanics have changed (e.g. no SOAP and WS*), but also the philosophy is slightly different to what you might be used to. But – don’t panic! By Dominick Baier
© Shutterstock
When you read up on web API security, sooner or later you will encounter the terms OAuth2, OpenID Connect and JSON Web Tokens (JWT). These are all (emerging) standards that serve different purposes: OAuth2 is for authorization; OpenID Connect for authentication and JWT defines the token format for identity (authentication) and access (authorization) tokens. Let’s have a look at them one by one.
OAUTH2
The most popular of the three - but also the most misunderstood and controversial. OAuth2 is a framework that allows an application to request access to a backend - this may be its own backend or a third-party one. Let's take Twitter as an example: there are myriads of Twitter client applications out there that talk to the Twitter backend on behalf of their user. This user (also called the resource owner
in OAuth-speak) certainly does not want to give the client application (aka the client) his Twitter password. Instead a so-called OAuth flow is used to request an access token from Twitter (or more specifically their OAuth authorization server) that the client can use to access the Twitter backend services. This is done in a way that the user only discloses his password to the Twitter authorization server and not to the client application. I am
pretty sure you all have seen the Twitter “authorize app” page before (see Figure 1).
Figure 1 Twitter consent dialog
Once the client has requested the access token, it can use it to access the user's backend data (like his tweets). This approach has a number of benefits: the user does not need to type his password into a partially trusted client, and the client developer does not need to deal with the password at all (which would typically include storing the password on the client device, which is potentially not very well secured). Instead the access token gets stored on the client, which allows users to change their password for the backend service without breaking existing clients, as well as revoking access for specific clients again (without having to change the password).

So in other words, OAuth2 describes the message format and interactions between the client, the authorization server and the resource server for various application types. OAuth2 is specified in RFC 6749 and RFC 6750. Whoever wants to learn more about it should at least read these documents [0], but in addition the book "Getting Started with OAuth2" [1] as well as the corresponding threat model [2] have lots of good information.

I heard OAuth2 is insecure?!
OAuth2 is a very controversial topic. Part of the reason is the fact that the original creator and lead editor of the spec left the project and withdrew his name from all documents. Another reason is that a number of "big" implementations have recently been hacked. The truth is that the specification document is not really helpful in guiding people in how to create secure implementations of OAuth2, and that's what happened to Facebook and the like. I discussed the pros and cons in two blog posts I wrote - [3] and [4].

JSON WEB TOKEN (JWT)
I mentioned identity and access tokens - they are in essence data structures describing resource owners and clients, as well as data and operations a client has access to. The content of the token is up to the issuer. Unfortunately the OAuth2 spec also did not mandate a specific token type, which led to a number of sub-optimal homegrown implementations. JSON Web Tokens (JWT), though, are becoming the de-facto standard for tokens because they are reasonably simple to create, parse and validate, and the fact that they are basically signed JSON data structures makes them easy to handle in almost all programming languages. A JWT consists of two parts: a header providing some metadata, and claims. Claims come in two flavors - reserved ones like issuer or expiration, as well as provider-specific ones like roles:

{
  "typ": "JWT",
  "alg": "HS256"
}

{
  "iss": "http://myIssuer",
  "exp": "1340819380",
  "aud": "http://myResource",
  "name": "alice",
  "role": "foo,bar"
}
For wire transmission you base64 encode the two parts and create a signature of those strings separated by a dot, e.g.:
eyJhbGciOiJub25lIn0.eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzODAsDQogImh0dHA6Ly9leGFt
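The encode-then-sign step described above can be sketched in a few lines. This is a hand-rolled illustration of the HS256 mechanics only - in real code use a vetted JWT library, and note that the key and claim values here are made up:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use base64url encoding with the trailing padding stripped.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def make_jwt(header: dict, claims: dict, key: bytes) -> str:
    # Encode header and claims, then sign "header.claims" with HMAC-SHA256.
    signing_input = (b64url(json.dumps(header).encode()) + "." +
                     b64url(json.dumps(claims).encode()))
    signature = hmac.new(key, signing_input.encode("ascii"), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

token = make_jwt(
    {"typ": "JWT", "alg": "HS256"},
    {"iss": "http://myIssuer", "exp": "1340819380",
     "aud": "http://myResource", "name": "alice", "role": "foo,bar"},
    b"a-made-up-shared-secret")

# The result has the familiar three dot-separated parts.
header_part, claims_part, signature_part = token.split(".")
decoded = json.loads(base64.urlsafe_b64decode(header_part + "=" * (-len(header_part) % 4)))
print(decoded["alg"])  # HS256
```

A validator performs the same HMAC over the first two parts and compares it to the third; if they differ, the token was tampered with.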
You can find the specification for JWTs at [5] and Microsoft's ready-to-use .NET implementation at [6]. A handy tool during development is the JWT debugger, which can be found at [7].

OPENID CONNECT
I said earlier that OAuth2 is about authorization; in other words, the client requests a token for a backend service. This token is typically opaque to the client and only makes sense in the context of the backend service it was issued for. But what if a client needs to validate the identity of a user, e.g. to provide personalization as well as access control to application functionality? In that case OAuth2 on its own is not enough (in fact some naïve approaches are inherently insecure - see [8]). OpenID Connect defines the additional concept of an identity token: a token that describes the user's identity to a client (also called the relying party in their lingo). In contrast to authorization (which is very application-specific), authentication protocols must be interoperable to be really useful. As a result, the OpenID Connect specification is also much stricter in its definition of message layouts and content as well as token formats (namely JWT), to give us the needed pluggability. OpenID Connect is the next version of the "classic" OpenID and is designed to be more API-friendly; it is very close to "RTM" and you can read about the various flows at [9].

SUMMARY
The purpose of this article was to introduce you to the security-relevant mechanisms in (modern) Web API communication and to make it easier to pinpoint which technology serves which purpose. My simple rule of thumb is: when a client requests a token for a backend, that's authorization, and OAuth2 is used for that. Does the client request
the token for its own purpose? That's authentication, and OpenID Connect is the protocol of choice (see Figure 2). Tokens should be in the JWT format - an emerging standard for which a number of "industry-strength" implementations already exist. For .NET developers, I can recommend having a look at our IdentityServer [10] and IdentityModel [11] open source libraries, which implement several parts of the OAuth2 stack (amongst other things).
[0] http://tools.ietf.org/html/rfc6749 and http://tools.ietf.org/html/rfc6750
[1] http://amzn.to/12ebbqi
[2] http://tools.ietf.org/html/rfc6819
[3] http://leastprivilege.com/2013/03/15/oauth2-security/
[4] http://leastprivilege.com/2013/03/15/common-oauth2-vulnerabilities-and-mitigation-techniques/
[5] https://datatracker.ietf.org/doc/draft-jones-json-web-token/
[6] http://nuget.org/packages/Microsoft.IdentityModel.Tokens.JWT/
[7] http://openidtest.uninett.no/jwt
[8] http://www.thread-safe.com/2012/01/problem-with-oauth-for-authentication.html
[9] http://openid.net/connect/
[10] https://github.com/thinktecture/Thinktecture.IdentityServer.v2
[11] https://github.com/thinktecture/Thinktecture.IdentityModel.45
Dominick Baier is an internationally recognized expert on the security of .NET and Windows applications. You can find a wealth of security-related resources as well as conference slide decks, tools and sample code at Dominick's blog at http://www.leastprivilege.com

Figure 2 Authentication vs Authorization
NDC is one of the World’s largest conferences dedicated to .NET and Agile development.
Book your tickets now!
• 2-day workshop
• 3-day conference

1-day conference: NOK 8.200,–
2-day conference: NOK 9.700,–
3-day conference: NOK 11.200,–
1-day workshop: NOK 5.900,–
2-day workshop: NOK 8.900,–
All Access Pass: NOK 18.200,–
Some of our signed speakers:
ndcoslo.com
Stop Wasting Your Life! Simply Beating the Waste of Overproduction By Russ Miles

© Shutterstock
TAKING THINGS SERIOUSLY: LIVES AND MONEY Whole lives and billions of pounds are being wasted every single day in the software development industry. Whether you’re building commercial software features or maintaining the longest-running, in-house batch process on Earth, your life, and others, could be being wasted.
A CASE STUDY IN WASTE
I was two weeks into a consultancy engagement with a large, London-based bank. I'd been brought in to help a team begin to deliver, and this was already beginning to happen.

The goal had been set to build some software that would reduce the amount of time people would need in order to do a menial task, freeing them up to honestly do something more valuable. You couldn't get more lean and ethically correct than that in a bank, and I was overjoyed to take the challenge on.

I was then invited to go and see the process that our software was supposed to reduce the waste within, and suffered what can only be described as the biggest conflict of my working life so far.

"The problem is so endemic that we don't even notice this tragedy on our own doorstep."

While the outcome was obvious, the problem was far from it. I had the right team, the right practices, the right technologies and the right process to deliver valuable software, and I'd realized that we were going to fail spectacularly. Why?

DELIVERING VALUABLE SOFTWARE WAS THE GOAL
Let's take a step back for a moment. It's worth recalling first that we've come a very, very long way and, since I can't universally canvas everyone in the industry, I'll share a personal view on how far we've come.

THEN (T-16 YEARS) => THEN (T-10 YEARS)
It wasn't that long ago, 16 years roughly, when I remember starting my journey towards becoming a professional software developer. That is, "being paid to create software for other people to use and sometimes buy".

I'd been a programmer for a couple of years and had experienced a collection of the worst development practices known to man as I attempted to build control systems for various batch processes. I experienced everything from schedule slippage to 'testing in production' within this bootcamp of my career, and to say I was desperate for a better way would have been an understatement.

I can remember very clearly the desperation I felt as I interviewed for my first full-time, real development position. I felt the interview was going well, and then the interviewer leaned forward and asked me the question I feared most, namely, "What's your preferred process for building software?"

I had no idea how to respond, but vaguely remember positing some ideas around "working closely with users and trying to deliver things quickly in a prototyping way". This was some time before the Agile Manifesto, and I am even a little proud that I was essentially stumbling over the idea of short iterations in order to experiment to derive what is really needed by the potential users of the software.

My interviewer was not impressed. It sounded too much like hacking to him, and so my ideas were set back half a decade or so. No, instead he leaned forward and, with a side-long smug smile creeping over his face and a conspiratorial tone to his voice, said:

"We typically run 9 month projects…"

Ok, 9 months is plenty of time and must buy some sort of small iota of job security, I reasoned. Then he continued,

"We know how to deliver the right software here; we always deliver on-time and to-budget."

You can imagine my excitement. This was music to my ears. I was so keen to find out this mythical approach that I almost didn't hear what came next…

"… 8 months we spend modeling, and then 1 month is spent coding"

I was stunned. The interviewer actually used the monkey-grinding-the-organ mime as he mentioned 1 month of coding. I was too young and impressionable to know better, and so 6 years of my professional life commenced when I worked on many such projects, using forward/reverse and even round-trip engineering. I can honestly say that I only ever delivered one or two products out of these projects at all in that time period, so either the original statements were untrue, or my presence on the company's teams had a disastrous impact on their ability to deliver. I think the evidence available thankfully supports the former. If there was a dark moment of the soul for software development in my career, this was it.

THEN (T-10 YEARS) => NOW
Then everything began to change. Light appeared at the end of the tunnel and, for once, it did not look like a train coming in the opposite direction.

The Agile Manifesto[1] was published and methods began appearing that supported this new way of working - a way of working that ultimately I'd always held close to my heart. I'd always firmly believed that software was about helping people to collaborate and work together, and so the principles of this hallowed document chimed immediately with my jaded experience of model-driven failure.

The key phrase for me, above all others, was the following agile principle:
"Our highest priority is to satisfy the customer through early and continuous delivery of valuable software."

This was a goal I could get behind. The sort of goal that I'd be happy pursuing for the next 10 years or so.

And that's exactly what I did, and I bet you did too. I adopted and customized processes to help myself and the teams I consulted with deliver. I helped people to adopt practices such as pair programming and clean code in order to help them build their software right. On the "building the right thing" front, I embraced Test-Driven Development to build my software right, Behaviour-Driven Development to do test-driven development right, and Specification by Example to try to get the right amount of tests and examples so that my software was right for its purpose. I was on a mission to deliver the right software, the right way, because that hit the goal of "early and continuous delivery of valuable software". I was simplifying the process of delivering valuable software and helping teams deliver more. Unfortunately for my team in the case study, that goal was dead wrong.

THE ALLEGORY OF THE UNSATISFIED PARTNER
Imagine, if you will, that you're in a loving relationship with a partner of your choice. You come home after a hard day's work and notice that your partner feels in need of something. All the signs are there that something is required, but you're not entirely sure what, and so you embark on a marathon of various jobs around the home because you assume that's what they'd appreciate. You mow the lawn; you prepare the dinner; you light the candles, order the movie and set the tone with the right music. If you have kids, then you put them to bed without a second thought. Finally, exhausted after doing everything you can think of to satisfy your partner, you sit down opposite them to enjoy the dinner, only to notice that they're now looking less happy than when you started.

And then they simply state, "All I wanted was a kiss and a smile when you got home".

You've over-produced, dramatically.
OVERPRODUCTION, THE CHALLENGE FOR THE NEXT 10 YEARS
Unfortunately for our industry, the Allegory of the Unsatisfied Partner is how we've been living our professional lives for the past half a century. We've assumed that people know what they want, we've assumed that we understand what they want, and we've assumed that software is the answer. Wrong, wrong and dead wrong.

It may come as a shock, but it's a truism that people don't really buy software; they buy a changed situation and the ability to do something they couldn't easily or simply do before. They really buy:

Situation Now => Some Change => Situation Then

Often they aren't sure what the end state should be, and then the transformation function looks more like:

Situation Now => Some Change => Probable Intermediate Situation => Some More Adjustments => Possible Situation Then

There may be many routes from the situation now to the situation then, and sometimes the building and delivering of software might be the simplest and right answer to enable those changes. However, sometimes software might not be the right answer, and unfortunately we're not really equipped to consider anything other than software. We've spent the last 10 years learning how to deliver valuable software frequently, but that implies that the answer is always software by the time it becomes our problem. Hopefully not to get too graphic: we've turned the tap on at the waste overflow pipe, we've fixed the hose to our mouths, and now we're drowning in a high-velocity flow of excrement and wondering why people are not happy about it.
EXPLORING OPTIONS AND HIGHLIGHTING ASSUMPTIONS BY BUILDING ROADMAPS, NOT ROADS (OR, WORSE, TUNNELS)
Back to the case study and my epiphany and resulting conflict. I'd realized that even with a great team, great technology, and even though the software was going to get the right result, we were still failing. The best way to begin to understand this failure is to look at the roadmap for the product being delivered. It went roughly like:

Manual complicated process completed in minutes => Write software over the course of ~2 years with a team of developers => Software that enables process to be completed in seconds

Take a look at that again - does that look like a product roadmap to you? I've been lost in most major cities around the planet and I can tell you, it's nothing like any roadmap I've encountered. In my roadmaps there are alternate routes, possible other means of transport, transient traffic condition warnings, roadwork warnings; to get from A to B via C I explore each of the possibilities and then experiment with the route as I run it, weighing up my options as I go. It's not a perfect strategy, but at least it recognizes the options.

The fact was that the product roadmap presented above was oversimplified because it focussed on the wrong goal. Instead of exploring possible options in a real roadmap, the oversimplified goal encouraged a road (or even a tunnel) from one point to another. A more balanced goal could be better summarized as:

Manual complicated process completed in minutes => ? => Process to be completed in seconds

The key there is that the route to that result was not defined, and the goal is broadened to not include the solution. It turns out there were many options to enable the goal, such as:

Manual complicated process completed in minutes => Customize existing software => Process to be completed in seconds
Under each option and route there are assumptions that need testing, just as I test my planned route and change it when working my way around Los Angeles. That's the essence of a true product roadmap: by setting the right goal you can then begin to ask the right questions and discover the many different options available for enabling that goal.
There's a reason I call kids mini-philosophers, and to a parent they can sometimes be as difficult to contend with as Socrates was on a bad day at the marketplace. I'll spare you the rest of the discussion; needless to say, at some point I utter the parent's final recourse of "Just because", which I've heard merely delays the discussion until they hit their early teens.
In my case study, the right questions had never been asked because the goal was always software.
What Charlie and Mali are doing, which I'll call the "MACH" approach, is very similar to many thinking tools that we do a pretty good job of ignoring in software development. The best known of these is the Five Whys approach, as originally codified by Kiichiro Toyoda's father, Sakichi. There are not really five whys involved; that's not the important point. The important point is to really figure out why you're doing something in order to build a proper roadmap towards a solution and explore the options on how to get there. The aim is to get to the root of a problem in order to embrace the right options for a solution.
ASKING THE RIGHT QUESTIONS: THE MACH APPROACH They say that kids can teach us a lot, and my kids are no exception to this rule. Charlie is 5, Mali is 4, and they spend most of their time asking me the same question over and over and over again, occasionally varying the context and often tag-teaming the interrogation. Here's an example:

Mali: "Where do you go Daddy during the week?"
Me: "Work."
Charlie: "Why?"
Me: "Because it makes us money to buy things and I love creating software."
Mali: "Why?"
Me: "Because people need software."
Charlie: "Why do they need software?"
Me: "Well, funny you should mention that because sometimes they don't and … well … isn't it getting late and time for bed?"
Mali: "Why do we need to go to bed…?"
In our case study, the real goal had been defined too narrowly. The real goal had nothing to do with software, and had everything to do with simplifying the process such that people could spend their time doing more valuable things. If we’d asked different questions we could have built a roadmap to enable the right impact to meet the right goal:
• Why is the change important; why this goal?
• Who will the change affect, and who will enable the change?
• How could the behavior change that's necessary to achieving the Why be enabled?
• What needs to be built or enabled for the behaviors to result?

These questions are the cornerstones of one simple approach to building a real product roadmap: Impact Mapping[2]. An Impact Map helps you sensibly explore the options and elicit underlying assumptions, enabling you to pick the simplest journey that will enable the most valuable result.

AVOIDING OVERSIMPLIFICATION Merely asking why is not enough, however. Even the why itself needs to be carefully examined. Let's take a look at the two goals stated, from our case study and from the retrospective journey of software development over the last 10 years:

Goal 1: Case study – "Software that enables process to be completed in seconds"

Goal 2: Software delivery for the last 10 years – "Our highest priority is to satisfy the customer through early and continuous delivery of valuable software."

There is a fundamental flaw, brought on by a non-explicit assumption, in both cases: the goal has been oversimplified. Oversimplification is where you reduce a concept down so far that it no longer delivers on the purpose and values it originally served. Or, as Edward de Bono more succinctly puts it[3], "Oversimplification means carrying simplification to the point where other values are ignored". Goal 1 is oversimplified because it prioritizes one solution approach, one 'What' if you will, over another, because the underlying values of the delivery team are taking priority over those of the business stakeholders. The goal
has been defined in terms of what the delivery team want to, or can, do, and has not explored the other potential options that a re-framed and more balanced set of underlying values would encourage.
the right change in the right way. This is a small but important step for our industry and it has the potential to stop us wasting lives and money on building the wrong things, even if we build them right.
Goal 2 is oversimplified for much the same reason. It assumes that the customer wants what we, the software developers, want to deliver: namely, software. When you consider that assumption a little more closely, you soon realize that this prioritization of values dramatically reduces the potential options that could be considered.
There are a number of techniques that I’ve seen work with various clients but, as a starting point, I’d like to suggest just two behaviors that you could begin to apply in your new role as ‘change developer’:
Oversimplification is another large contributor to overproduction, and Impact Mapping helps to avoid it by providing a simple tool for exploring every aspect of your roadmap, ensuring you're explicit about the values and assumptions you're making about the route to an eventual solution. DELIVER VALUABLE CHANGE: BUILD ROADMAPS, NOT ROADS In conclusion, I contend that we've oversimplified what we do as a profession, and that a small broadening of our remit will have a huge impact on our ability to beat our next challenge: the waste of overproduction. I'd like to suggest a reframed challenge for the next 10 years of software development evolution. I'm not precious about it taking 10 years, but seeing as roughly every 10 years we come to a new realization, I think we could do worse than getting behind a broader goal. Instead of: "Our highest priority is to satisfy the customer through early and continuous delivery of valuable software."
- Apply Simplicity Practically to every aspect of what you do, being careful at all times to Avoid Oversimplifying, particularly when it comes to goals as that’s a sure-fire shortcut to overproduction. It’s going to be an interesting journey over the next 10 years, I hope you’ll join me in making sure we spend our lives productively and profitably by focussing on enabling the right changes rather than just building software right.
I guess you can't win them all. Maybe together we can win some and stop wasting money and lives on delivering the wrong thing? I'd certainly like to think so. AUTHOR BIOGRAPHY Russ Miles is Principal Consultant at the management and software development consultancy Simplicity Itself (www.simplicityitself.com). Providing training and consulting to his clients, Russ simplifies solutions, processes, practices and technologies to enable the right change to be delivered at the right time. References [1] The Agile Manifesto, http://www.agilemanifesto.org [2] Gojko Adzic, "Impact Mapping: Making a Big Impact with Software Products and Projects" [3] Edward de Bono, "Simplicity", 1998
AND FINALLY, WHAT HAPPENED IN THE CASE STUDY? Ah yes, the case study. I’d left you with my epiphany and I’ve explained that through impact mapping, applying simplicity and broadening our remit I’d hit a conflict with what we were planning.
“Our highest priority is to satisfy the customer through early and continuous delivery of valuable change.”
So the outcome was this: I recognized that by observing the process in action, exploring a roadmap of options using the Impact Mapping technique, and then experimenting with the simplest options along the journey, I could achieve the same goal without developing any software at all. My epiphany looked like the following:
It is my belief that over the next 10 years we will stop focussing quite so much on how to deliver software at all, and begin to focus on how we deliver
Manual complicated process completed in minutes => Introduce small change in behavior => Process to be completed in seconds
I’d like to suggest that we explore the processes, practices, techniques and tools that enable us to state:
- Always ask Why (but not always in the same way, otherwise you run the risk of the annoying side of the MACH approach) and, on top of Why, ask Who (the change will affect, and who will enable it), How (the behavior change that you're looking to enable) and What (needs to be built or enabled for the behaviors to result)
No software required; I’d done myself and my team out of a job and I dutifully reported this to the project management and business stakeholders. I was over-ruled and the project commenced without delay and as far as I know much time and money is still being wasted on building a complex software solution to the original narrow goal.
Russell has been working in the software development industry since 1997, in a range of domains from Search through to Defence. His roles in those domains have progressed through Developer, Senior Developer, Team Lead, and Senior Technologist responsible for multi-million-pound research. Twitter: @russmiles Email: russell.miles@simplicityitself.com
Courses with
MIKE COHN CERTIFIED SCRUMMASTER - CSM – MIKE COHN This two-day course—taught by author and popular Scrum and agile trainer Mike Cohn—not only provides the fundamental principles of Scrum, it also gives participants hands-on experience using Scrum, and closes with certification as a recognized ScrumMaster. DESCRIPTION During the ScrumMaster class, attendees will learn why such a seemingly simple process as Scrum can have such profound effects on an organization. Participants gain practical experience working with Scrum tools and activities such as the product backlog, sprint backlog, daily Scrum meetings, sprint planning meeting, and burndown charts. Participants leave knowing how to apply Scrum to all sizes of projects,
from a single collocated team to a large, highly distributed team. YOU WILL LEARN
• Practical, project-proven practices
• The essentials of getting a project off on the right foot
• How to write user stories for the product backlog
• Why there is more to leading a self-organizing team than buying pizza and getting out of the way
• How to help both new and experienced teams be more successful
• How to successfully scale Scrum to large, multi-continent projects with team sizes in the hundreds
• Tips and tricks from the instructor's ten-plus years of using Scrum in a wide variety of environments
COURSE DATES Oslo: 3 June, 17 Sep, 9 Dec London: 24 Sep, 3 Dec
CERTIFIED SCRUM PRODUCT OWNER - CSPO – MIKE COHN Certified Scrum Product Owner training teaches you, the product owner, how to use the product backlog as a tool for success. As you watch the product take shape, iteration after iteration, you can restructure the product backlog to incorporate your insights or respond to changes in business conditions. You can also identify and cancel unsuccessful projects early, often within the first several months. The Certified Scrum Product Owner course equips you with what you need to achieve success with Scrum. Intuitive and lightweight, the Scrum process delivers completed increments of the product at rapid, regular intervals, usually from every two weeks to a month. Rather than the traditional system of turning a
OSLO - www.programutvikling.no
project over to a project manager while you then wait and hope for the best, Scrum offers an effective alternative, made even more attractive when considering the statistics of traditional product approaches, in which over 50% of all projects fail and those that succeed deliver products in which 64% of the functionality is rarely or never used. Let us help you avoid becoming one of these statistics. YOU WILL LEARN
• Practical, project-proven practices
• How to write user stories for the product backlog
• Proven techniques for prioritizing the product backlog
• How to predict the delivery date of a project (or the features that will be complete by a given date) using velocity
• Tips for managing the key variables influencing project success
• Tips and tricks from the instructor's fifteen years of using Scrum in a wide variety of environments
COURSE DATES Oslo: 5 June, 19 Sep, 11 Dec
LONDON - www.developerfocus.com
Grid Computing on Windows Azure
In 2010 the Windows Azure platform was launched into production, and there was much talk of cloud computing and the benefits that it would bring to developers and organizations. By Alan Smith
The ability to deploy applications to a datacenter in minutes and to scale those applications massively moved the boundaries of what was possible, and created new opportunities for individuals and organizations that were willing to explore this rapidly evolving technology. Having an interest in Azure and all things cloud, I had been delivering a number of presentations and developing training material for the Windows Azure platform. As I am a developer at heart, I try to make my presentations as demo-intensive as possible. Having sat through a number of presentations on how great cloud computing will be, and how it will deliver rapid and massive scalability, I started thinking about how I could demonstrate this scalability live on stage during a one-hour session at a conference. I started my IT career in 1995 as a 3D animator and graphics programmer for a company developing CD-ROMs that taught physics to school kids. At the time text-based ray-tracers were popular, and I had played around with a ray-tracer called PolyRay as a hobby, and got addicted. Ray-tracing takes an enormous amount of compute time, as many calculations must be made for each pixel in an image, and it takes many images to make an
animation. Creating a render farm on-demand and then scaling it to a large number of processing nodes sounded like a great scenario to show off the potential power of cloud computing. THE TWO-DOLLAR DEMO Creating a basic application that could render 3D animations using PolyRay in Windows Azure was surprisingly easy. Windows Azure allows Worker Roles to be created, which share some similarities with Windows Services. The PolyRay executable file could be included in the Worker Role project, and then executed as a process when running in a Windows Azure datacenter. The Windows Azure storage service provides blobs, queues and non-relational tables. Blobs act like file storage and were used for the textual scene description files; a queue was used to queue up the processing jobs, one message for each frame in the animation; and a table was used to store the processing statistics. The use of a queue allows the workload to be distributed across multiple worker roles, allowing the application to scale to hundreds, or even thousands, of render nodes. The first time I ran this application as a demo was in Göteborg in October 2010. I was using the Windows Azure allowances on my MSDN account, which was limited to 20 CPU cores,
so I started off with one Worker Role instance and then scaled to 16 instances live during the demo. It took about 15 minutes from the time I requested the additional instances until they were provisioned and processing animation frames. During the one-hour session I was able to render a fairly simple 200-frame animation. The cost of running single-core instances in Windows Azure was $0.12 per hour, so running 16 instances for an hour cost $1.92. Add in a few cents for the storage charges that would be incurred and I had what I jokingly referred to as "My Two-Dollar Demo". SCALING OUT: 256 WORKER ROLES AND KINECT I ran the two-dollar demo a number of times at conferences and user groups; I liked the way it demonstrated how cloud computing could deliver rapid scalability in a very cost-effective way. Running 16 instances was OK, but I really wanted to demonstrate something that would be difficult and very expensive to achieve with an on-premise solution. Being a developer, the next step up from 16 is 256, so I decided to scale out my demo to use 256 worker roles. Moving from a $2 demo to a $30 demo would require a more impressive animation; I wanted something
that would look cool and be fun to work with in a demo. Back in the 90s I always wanted to create a ray-traced animation of one of those pin-board desktop toys, but with the limited processing power of PCs at the time it would have taken several months to render. After a few hours experimenting with the Kinect SDK and the Kinect sensor from my Xbox 360 I was able to capture depth data and save it as a series of image files. Reading these images into a C# application and using the pixel data, I was able to create a scene file that describes a pin-board with the pin positions set using the depth data. This scene could then be rendered in PolyRay to produce an image. Rendering a single frame took about five minutes, so creating an 80-second animation with 2,000 frames would take almost a week if a single instance was used. With 256 instances it would be possible to complete the animation during a one-hour conference session. I talked to a few developers on the Azure team about the feasibility of running such a demo live. I remember asking someone, "What would happen if I tried to spin up 256 Worker Role instances at once?" The answer was, "I don't know, but it would be really cool to try!" I have run the demo a number of times at user group events and larger conferences. I typically start by running 4 instances, and then change the scalability setting to 256 instances about 10 minutes into the session. It typically takes between 15 and 25 minutes for the extra 252 instances to be provisioned and start processing, and the animation rendering has always completed before the end of the session. There have been a couple of times when it has taken longer to scale out but, so far, the demo has never failed. GRID COMPUTING SCENARIOS Although the animation techniques I am using are dated, the use of cloud platforms for grid computing scenarios is an emerging field. Provisioning and maintaining large-scale compute resources on-premise is challenging
and expensive and often out of reach of smaller organizations. The ability to create massive compute resources on-demand and only pay for them when you are using them creates a lot of opportunities that were not available a few years ago. Public cloud platforms have the capacity to allow users to provision tens of thousands of server instances that can analyze and process petabytes of data. These environments can be utilized for a few days, or even a few hours, and then de-provisioned when no longer required. The current pricing for public cloud resources brings this compute power within reach of smaller organizations and research teams.
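The queue-driven scale-out described above can be sketched in-process. This is a simulation only, assuming nothing about the actual demo code: a ConcurrentQueue stands in for the Azure storage queue (one message per frame), and Tasks stand in for worker role instances; all names are illustrative.

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

class RenderFarmSketch
{
    // Drain the frame queue with the given number of simulated "worker roles";
    // returns how many frames were rendered.
    public static int Render(int frameCount, int workerCount)
    {
        var jobs = new ConcurrentQueue<int>(Enumerable.Range(1, frameCount));
        var completed = new ConcurrentBag<int>();

        var workers = Enumerable.Range(0, workerCount).Select(_ => Task.Run(() =>
        {
            // Each worker loops: take a frame number, "render" it, repeat.
            while (jobs.TryDequeue(out var frame))
            {
                // A real worker role would shell out to PolyRay here.
                completed.Add(frame);
            }
        })).ToArray();

        Task.WaitAll(workers);
        return completed.Count;
    }
}

Because each worker simply pulls the next message when it is free, adding workers speeds things up without any coordination logic — the same property that let the real demo scale from 16 to 256 instances: RenderFarmSketch.Render(200, 16) returns 200.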
Alan Smith works as a developer, trainer, mentor and evangelist at Active Solution Stockholm where he specializes in Windows Azure development and training. He is one of the lead contributors to the Azure development community. Feel free to contact him with any questions on cloudcasts.net@gmail.com.
A FIRST PATTERN OF TPL TPL (the Task Parallel Library) and Dataflow have been part of the .NET Framework for a while now, and I'm still surprised at how relatively obscure they are in the general .NET developer's knowledge set. Both provide elegant solutions to a number of hard problems in concurrent application development, providing the ability to create much simpler code than when just using the regular threading libraries in .NET. By Michael Heydt
This article is the first in a series of articles, blog posts, and conference presentations that I will be giving on using TPL and Dataflow to solve many common concurrency patterns in desktop systems, and in particular desktop trading systems. THE FIRST PATTERN OF TPL: SCATTER AND GATHER This first article will explain a common pattern that I refer to as "scatter-gather", which is a common need of many financial systems (as well as other types of systems) when they start execution. At startup these applications commonly need to retrieve data from many external systems while allowing the user to continue with other actions while the data is gathered. Often there are dependencies in this data, which require all or a subset of the data to be gathered before other data, and before any data is organized for the rest of the application. This "scatter-gather" pattern has at least the following three requirements, whose implementation with TPL I'll discuss:

1. Execute 2..N data requests in parallel and preferably asynchronously
2. While requests are pending, the user can still perform other functions (the UI is not blocked)
3. When data is received from all requests, execution continues to take action on all data items

Traditionally, coding a solution for this with threads tends to get complicated: multiple threads need to be created and monitored for completion, data needs to be passed from the threads back through shared and concurrent data buffers, and there is the issue of synchronization with the user interface thread when the data is ready. TPL has been designed to take all of these issues into account without the need to explicitly code all of these constructs. Let's examine some of the constructs in TPL that facilitate coordinating this complexity, using a few simple examples. Tasks, unlike threads, can return data upon their completion. Threads are in a sense fire-and-forget.
If you want to return data from them, you need to code the thread to put the data into memory somewhere. And to do this, you need to make sure that if the threads servicing all of your tasks share a buffer, you provide synchronization around that buffer. In comparison, I like to think of tasks as "promises" to perform a unit of work at some time in the future and then potentially return a result (tasks can, if you want, return void). The unit of work may or may not be executed in parallel by TPL, but using the default task scheduler in .NET it will run on the thread pool. When the task has finished the unit of work, it provides a simple means of synchronizing and retrieving the result of the unit of work back to its creator.
If you think of the unit of work in terms of a function, then you are on your way to being able to utilize tasks effectively. Consider the following code:

var f = new Func<int, int, int>((a, b) =>
{
    Thread.Sleep(1000);
    return a + b;
});

Console.WriteLine("Waiting for completion");
var r = f(1, 2);
Console.WriteLine(r);
Console.WriteLine("Continuing on to do some other work");
This declares a function to add two integers (imposing a nominal delay of one second), adds 1 + 2, and writes the result to the console. The function f(1, 2) is executed synchronously, meaning the flow of code that calls the function halts until the function returns. This program will halt for one second before writing the result of the computation. This is normal program execution, albeit a little more functional in nature with the inline function being declared. Now let's change this to execute as a task to introduce asynchronous processing with TPL:

var f = new Func<int, int, int>((a, b) =>
{
    Thread.Sleep(1000);
    return a + b;
});

Console.WriteLine("Waiting for completion");
var task = Task<int>.Factory.StartNew(() => f(1, 2));
task.ContinueWith(t => Console.WriteLine(t.Result));
Console.WriteLine("Continuing on to do some other work");
Console.ReadLine();
Instead of directly calling the function, the program instructs the default task scheduler to execute the function asynchronously to the caller, via the StartNew method on the static Factory property of the Task class. StartNew returns a task object, which is the representation of the "promise" to do work and return a value in the future. Execution continues for this method, the console message that the program is continuing is displayed, and the task completes later on its own schedule. So, how do we know when this task is complete? And how do we get the result of the task? This is the purpose of the ContinueWith method, as well as the Result property, of a task. When a method that is executed via a task completes, the TPL will signal that fact, and this can be hooked into automatically by passing a method to the task via its ContinueWith method, which the TPL will execute when the task completes.
This method that the TPL executes upon task completion is passed a single parameter, which is the task that has completed. In this example, the value of 't' would be a reference to the same object as the variable 'task'. This is convenient, as the continuation method may not be inline like this example, and therefore this provides the reference to the completed task. From that reference, the value returned from the method executed by the task is available through the Result property, which is an integer value and will be 3.
The continuation then calls the function to add those results, returning that value as the result of that third task, which, when complete, writes its output to the console with another continuation.
It is worth noting how elegant this code is compared to using normal threads. To handle this situation in that model we would have to introduce one of the asynchronous patterns, either APM (the asynchronous programming model) or EAP (the event-based asynchronous pattern). For reference on these, please see http://msdn.microsoft.com/en-us/library/jj152938.aspx. The pattern that this program uses is referred to as TAP, the task-based asynchronous pattern, and compared to EAP or APM, TAP has already saved us a lot of code and reduced potential errors.
SUMMARY Although trivial in its computations, this example has shown the simplicity of creating and composing tasks and flowing their results into further processing actions. The simple calculations in this example can be replaced with much more complicated actions. A real-world example from the domain of trading systems would be requesting current equity positions in parallel from multiple exchanges, combining them with enrichment data from internal systems that is added after the market data is retrieved, and then creating the view model and view representation of the data.
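The trading-system scenario just described has exactly the scatter-gather shape shown in this article. Below is a minimal sketch of it — the exchange names are arbitrary and the "market data" is simulated with a delay and a stand-in value, since the real calls would go to external systems:

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class ScatterGatherSketch
{
    // Stand-in for an asynchronous market-data request to one exchange.
    static Task<int> RequestPositionCount(string exchange) =>
        Task<int>.Factory.StartNew(() =>
        {
            Thread.Sleep(100);      // simulate network latency
            return exchange.Length; // hypothetical "position count"
        });

    public static int GatherTotal(string[] exchanges)
    {
        var requests = exchanges.Select(RequestPositionCount).ToArray(); // scatter
        return Task.WhenAll(requests)                                    // gather
            .ContinueWith(t => t.Result.Sum())
            .Result; // blocking only for this console sketch; a UI would use ContinueWith
    }
}

The three requests run concurrently, so the whole gather takes roughly one request's latency rather than three; the continuation fires once, with all results in hand.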
But this is not what I consider the greatest benefit that TPL provides for us. TPL also allows us to compose tasks, providing simple orchestration of processing by taking the results of tasks and flowing them into other tasks. (There are other benefits, but they are beyond the scope of this article.) Extending this example a little, let's change the code to execute the function twice in parallel (adding 1 + 2, and 3 + 4) and, when both are complete, add the results of those two values, which will result in an output of 10 to the console.

var f = new Func<int, int, int>((a, b) =>
{
    Thread.Sleep(1000);
    return a + b;
});

Console.WriteLine("Waiting for completion");
var task1 = Task<int>.Factory.StartNew(() => f(1, 2));
var task2 = Task<int>.Factory.StartNew(() => f(3, 4));
var task = Task.WhenAll(task1, task2)
    .ContinueWith(tasks => f(tasks.Result[0], tasks.Result[1]));
task.ContinueWith(t => Console.WriteLine(t.Result));
Console.WriteLine("Continuing on to do some other work");
Console.ReadLine();

The program starts the two tasks, each similarly to the earlier example, but passing different values to each. It then uses the static Task.WhenAll method of the Task class to wait for all specified tasks to complete. When all tasks are complete, the TPL will execute the specified method; this is actually executed as another task. This continuation task is passed an array of integers, one for each task passed into the Task.WhenAll method.

Reconstructing this with equivalent thread-based code is actually non-trivial compared to the previous example. There are a number of techniques to implement this with threads, all of which are nowhere near as simple, and all more error prone than this simple TPL code.

This article has also only focused on this first pattern, and has not covered other valuable TPL concepts such as exception handling and automatic task synchronization with other contexts such as the UI thread. Also, this composition of tasks with continuations, although elegant in its simplicity, is limited in its flexibility for decision making, and in other complicated constructs such as batching of results and choosing continuation paths based upon results from finished tasks. These actions are better suited to an additional framework, the Task Dataflow Library.
Also not covered: the .NET 4.5 compilers provide the async/await constructs, which simplify this model of TPL code with continuations even more, by providing direct compiler support for handling continuations with closures of code in the same method. But that's a story for another article… For further coverage of these concepts, additional patterns of TPL, Task Dataflow, and async/await, come to my session at NDC 2013 and/or check out my blog at 42Spikes.com.
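As a taste of that story, here is a sketch (assuming .NET 4.5) of the earlier two-task example rewritten with async/await — the WhenAll/ContinueWith plumbing collapses into straight-line code:

using System;
using System.Threading.Tasks;

class AsyncAwaitSketch
{
    // The same one-second "add" as before, now as an async method.
    static async Task<int> AddAsync(int a, int b)
    {
        await Task.Delay(1000); // non-blocking stand-in for Thread.Sleep
        return a + b;
    }

    // The composition reads top to bottom; the compiler generates the continuations.
    public static async Task<int> RunAsync()
    {
        var results = await Task.WhenAll(AddAsync(1, 2), AddAsync(3, 4));
        return await AddAsync(results[0], results[1]); // 3 + 7 = 10
    }
}

AsyncAwaitSketch.RunAsync().Result yields 10, matching the ContinueWith version, but without any explicitly written continuation lambdas.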
Michael Heydt is a Principal in the Capital Markets Advanced Technology Practice for SunGard Global Services, the world's leading provider of software for financial trading organizations.
Join the USER GROUPS at NDC's community area Whether you want to record a podcast, participate in a hackathon or simply kick back and relax in a Buddha bag, there are lots of activities to get involved with at the community area.
CAFÉ Here you will find the ideal place to unwind in a relaxing atmosphere. The lean café with an open space awaits you with a barista, fabulous Buddha bags, and easy access to the multiple activities going on at the stage. Share your knowledge and exchange your impressions with other developers. THE STAGE At the stage you can witness live programming, record a podcast and listen to short speeches. CODING Put your coding skills to the test by participating in a hackathon, coding dojo or a coding competition. Find and join us at the NDC venue
Having been involved with Domain Driven Design (DDD), Command Query Responsibility Segregation (CQRS) and Event Sourcing (ES) for a few years, I've helped various clients understand these approaches better, and implement them where appropriate to deliver tangible benefit. Along the way, I've seen various approaches and conceptions that are somewhat misguided, resulting in wasted time and effort.
DDD, CQRS, ES
Misconceptions and Anti-Patterns By Ashic Mahtab
Sometimes this is so severe that the affected teams end up in the very situation they specifically wanted to avoid – a complicated mess that people are fearful of touching, because who knows what will happen. In this article, we will look at a few of these misconceptions. TACTICAL DDD WITHOUT STRATEGY When talking about DDD, a lot of people think of Repositories, Specification patterns, and Domain Services. A lot of discussion revolves around abstractions to maximise code reuse – how we can have a generic repository that takes generic specifications to allow us to whip up new data access objects quickly; how we can have a re-usable validation mechanism to write validation rules once in the "Domain" and have some strange magic generate those rules in JavaScript so that they can be automatically applied in web-based forms; how we can ensure fast querying of complicated domain models via the use of some second-level cache. This does not stop with mere discussion – some of the most talented developers burn candles at both ends writing tools and frameworks to achieve these purposes. The tactical side of DDD – relating to implementation – is fun to work with. It is fun to work with
for technical folk because it is a technical domain. It is what developers grasp easily and understand. Notice how all the points just mentioned are generic. There is no business context to any of this (well… unless you're building a framework to "do" DDD). Eric Evans's Big Blue Book outlines some technical implementations. These may or may not suit different scenarios. However, the book goes on to talk about things like Strategic Design: bounded contexts, ubiquitous languages, context maps, integration approaches. This is the strategic part of DDD, and this is where most of the power of DDD lies. This is lost on many – they read the first part of the book and go off and build technical solutions that are not strategically sound. It is often said that the best way to read the Blue Book is to start at chapter 11, read to the end, and then read from start to finish. If you are to use DDD, focus strongly on the strategic elements. The value of these will endure, while technical approaches may come and go. THE UBIQUITOUS LANGUAGE AND BOUNDED CONTEXTS The idea of the ubiquitous language is quite often misunderstood. Many think it to be a common language that developers, testers, business
analysts, business users, and even consumers share – and that there is one ubiquitous language for the whole organisation. In reality, it is futile to get different parts of an organisation to agree on one shared language for everything. The same concept usually means different things to different people within the same company, and attempts to establish a shared vocabulary merely result in people maintaining a mental map from the terms they actually use to the policy-enforced terminology. This defeats the whole purpose of the ubiquitous language. A better way of leveraging the ubiquitous language is to acknowledge that different groups within an organisation will use different terminology. This lets us draw linguistic boundaries and have one ubiquitous language for each boundary. We can then have one model per boundary, instead of attempting to have one model to rule them all. If you haven't guessed already, such a boundary represents a bounded context.

TECHNICAL STANDARDISATION
I have often seen organisations promote a standard set of tools and practices as an organisational standard. Sometimes teams are set up to build generic subdomains, and committees are set up to enforce them.
CQRS – ONLY FOR HIGH PERFORMANCE / DISTRIBUTED SYSTEMS
DDD, CQRS, ES, and other patterns bring different pros and cons, depending on how they are implemented and which combinations are used. A common misconception is that CQRS is only applicable to high performance and/or distributed systems. A big benefit of CQRS is a drastically simplified domain model. Depending on the implementation approach, it can encourage fewer levels of needless abstraction, friendlier task-based UIs, easier querying, and better maintainability. These benefits are not limited to high performance scenarios or distributed computing.
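To make that concrete, here is a minimal sketch of CQRS inside a single process – no bus, no distribution, no async. All names (RenameCustomer, customerNameView) are illustrative, not from the article: a task-based command is handled and the read model is updated synchronously in the same call.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: command/query separation wired up synchronously,
// in one process. Requires Java 16+ for records.
public class InProcessCqrs {
    // Task-based command, rather than a generic "UpdateCustomer" DTO.
    record RenameCustomer(String customerId, String newName) {}

    // Write side: the domain decides; here it merely validates the rename.
    static String handle(RenameCustomer cmd) {
        if (cmd.newName() == null || cmd.newName().isBlank())
            throw new IllegalArgumentException("name must not be blank");
        return cmd.newName();
    }

    // Read side: a denormalised view, updated in the same call/transaction.
    final Map<String, String> customerNameView = new HashMap<>();

    void process(RenameCustomer cmd) {
        String approvedName = handle(cmd);                     // command handling
        customerNameView.put(cmd.customerId(), approvedName);  // projection update, in sync
    }

    public static void main(String[] args) {
        InProcessCqrs app = new InProcessCqrs();
        app.process(new RenameCustomer("c-1", "Ada Lovelace"));
        System.out.println(app.customerNameView.get("c-1")); // prints Ada Lovelace
    }
}
```

The separation of the write model from the read model is intact; only the delivery mechanism is synchronous. Making the projection update asynchronous later is a deployment decision, not a redesign.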
TRANSACTIONAL CONSISTENCY
A common complaint is that CQRS prevents us from having transactional consistency – since commands fire asynchronously and/or read models update asynchronously, we cannot have immediate consistency. For a complex but small application, this looks like a drawback. The fallacy here is that there is no hard and fast rule that says these operations need to happen asynchronously. If the performance metrics are within acceptable limits, we can run the whole thing inside a single local transaction and prevent any read model latency issues. Or we can run some read model updates in sync and others asynchronously. If performance is still not good enough, we can run all projections async but handle commands in sync. Where even that isn't fast enough (and we can't improve performance otherwise), we would then think about relinquishing transactional consistency. CQRS is intended to simplify, not "complect". Going async from the start can require a lot of work, negotiation, and effort – don't focus on it if it's not needed.
BUTTERFLY EFFECT
People often refer to the butterfly effect – one small change in one place causing massive changes all over the place. This is usually a result of complex architectural topologies that have intimate knowledge of each other, which in turn is a result of trying to apply the same model to the whole organisation. CQRS should be used within a bounded context, and integration with others needs to be modelled and maintained properly. It should be possible to switch from CQRS to some other mechanism, and vice versa, within a context without affecting anything outside. If such boundaries are maintained, a fluttering of the butterfly's wings should not be a cause for major alarm.
Where standardisation committees exist, significant negotiation is needed to get sign-off before even starting to build something. This results in nothing but immense amounts of waste. Different bounded contexts have different needs, and they need to be developed according to those needs. While some common sense can be used to reduce the number of technologies in play, forcing everything through a single approach promotes the wrong idea, and consequently the wrong result. As long as a bounded context has a stable manner of integrating with others (be it via a published language or otherwise), there should be no policies dictating how it is implemented internally. Such policies bring about the need for system-wide rewrites whenever new technologies are needed.
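The one-model-per-boundary idea discussed earlier can be sketched as follows. The SalesProduct/ShippingProduct names are hypothetical, chosen only to show the same business term ("Product") modelled separately in two bounded contexts, with explicit translation at the boundary instead of one shared class.

```java
// Illustrative sketch: two bounded contexts each own their model of "Product".
public class BoundedContexts {
    // Sales context: what matters is price.
    static class SalesProduct {
        final String sku;
        final long priceInCents;
        SalesProduct(String sku, long priceInCents) {
            this.sku = sku;
            this.priceInCents = priceInCents;
        }
    }

    // Shipping context: what matters is weight; price is irrelevant here.
    static class ShippingProduct {
        final String sku;
        final int weightInGrams;
        ShippingProduct(String sku, int weightInGrams) {
            this.sku = sku;
            this.weightInGrams = weightInGrams;
        }
    }

    // Integration at the boundary: translate explicitly. The shared SKU plays
    // the role of a (tiny) published language; neither context leaks its model.
    static ShippingProduct toShipping(SalesProduct p, int weightInGrams) {
        return new ShippingProduct(p.sku, weightInGrams);
    }
}
```

Either class can now change, or be reimplemented with a different technology, without forcing a rewrite on the other context.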
EVENT SOURCING – ONLY AN EVENT LOG
Event sourcing gives us more than a mere event log. It provides a single source of truth, enables hydration of aggregates without exposing state, gives us better tests that can generate documentation, lets us roll back time for debugging, lets us easily add new reports when needed, eases integration between components, and a whole host of other things. Performance-wise it can be "fast enough", and in many cases faster than loading via queries that perform complex joins across eight tables. Not needing an event log is not, by itself, a reason to forgo this approach.
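A minimal sketch of the hydration idea, with illustrative names (a toy account aggregate): state stays private, and the current state is rebuilt by replaying the event stream – the single source of truth.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of an event-sourced aggregate. Requires Java 17+
// (records, sealed interfaces).
public class EventSourcedAccount {
    sealed interface Event permits Deposited, Withdrawn {}
    record Deposited(long amount) implements Event {}
    record Withdrawn(long amount) implements Event {}

    private long balance;
    private final List<Event> uncommitted = new ArrayList<>();

    // Hydration: fold the history through apply(); no state is exposed.
    static EventSourcedAccount fromHistory(List<Event> history) {
        EventSourcedAccount a = new EventSourcedAccount();
        history.forEach(a::apply);
        return a;
    }

    private void apply(Event e) {
        if (e instanceof Deposited d) balance += d.amount();
        else if (e instanceof Withdrawn w) balance -= w.amount();
    }

    // Behaviour records a new event rather than mutating state directly.
    public void withdraw(long amount) {
        if (amount > balance) throw new IllegalStateException("insufficient funds");
        Withdrawn e = new Withdrawn(amount);
        apply(e);
        uncommitted.add(e);   // to be persisted to the event store
    }

    public long balance() { return balance; }
}
```

Because the history is just data, a test can be expressed as "given these events, when this command, then these new events", which is where the self-documenting tests mentioned above come from.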
SNAPSHOTTING
Many see the replaying of all events to hydrate an aggregate as a potential problem, and look into snapshotting from the get-go. In reality, snapshotting is rarely needed, and often counterproductive. Often, a measured need for snapshotting suggests a hidden domain concept; at other times, it points to incorrect aggregate boundaries. Before jumping to snapshotting, measure and demonstrate the need, and verify that better modelling cannot deliver the same benefit.

APPLICABILITY
It is unfortunately quite common to start using technologies in projects because they're "cool". The worst decision one can make is to use the wrong approach to solve a problem. DDD, CQRS, and ES all bring different benefits, yet each requires a certain amount of effort to apply. If a context does not provide competitive advantage, can be bought off the shelf, or can be implemented with something as simple as two-tier forms-over-data, then attempting to use DDD, CQRS, and/or ES will be a matter of immense waste. Such time is better spent in contexts that can deliver more business value.

These are just some of the misconceptions around DDD, CQRS, and Event Sourcing out in the wild. If you are considering these approaches, ensure you make an informed decision as to which one(s) you use, and how you go about applying them.

Ashic is a .NET consultant based in London and an ASP.NET MVP since 2009. His experience ranges from real-time fault monitoring systems to working for Her Majesty at Parliamentary ICT. When not messing about with code, he can be found moderating http://www.asp.net or having a rant on Twitter. He is passionate about software design, messaging, DDD, CQRS, Event Sourcing and almost anything to do with software, and often blabbers about those things at various user groups and conferences. He is also the founder of the London ZeroMQ User Group.
MOVING FROM C++98 TO C++11
EFFECTIVE C++11 PROGRAMMING – Scott Meyers
2 days training · 10 June, Oslo · 17 June, London
Topics: rvalue references, move semantics, and perfect forwarding · smart pointers · lambda expressions · secrets of the C++11 threading API
Software developers familiar with the fundamentals of C++11 are ready to advance from knowing what’s in C++11 to understanding how to apply it effectively. This seminar, based on information in Scott Meyers’ forthcoming Effective C++11, highlights the insights and best practices of the most accomplished C++11 programmers: the things they almost always do (or almost always avoid doing) to produce clear, correct, efficient code.
The seminar is based on lecture presentations, with ample time for questions and discussion by attendees. There are no hands–on exercises, but participants are welcome – encouraged! – to use their C++ development environments to experiment with the ideas in the seminar as they are presented. 10 JUNE. Venue: Radisson Blu Plaza, Oslo 17 JUNE. Venue: Holborn Bars, London
SCOTT MEYERS
Scott Meyers is one of the world's foremost experts on C++ software development. He offers training and consulting services to clients worldwide. Scott wrote the best-selling Effective C++ series (Effective C++, More Effective C++, and Effective STL) and orchestrated their initial electronic publication (originally as Effective C++ CD, subsequently as DRM-free PDFs). He's also the author of Overview of the New C++ (C++11) and Effective C++ in an Embedded Environment. Scott founded and is Consulting Editor for Addison Wesley's Effective Software Development Series, and he conceived the boutique conferences The C++ Seminar and C++ and Beyond. He received the 2009 Dr. Dobb's Excellence in Programming Award. A programmer since 1972, he holds an M.S. in Computer Science from Stanford University and a Ph.D. from Brown University.
OSLO - www.programutvikling.no
LONDON - www.developerfocus.com
A very brief introduction to API Usability
By Giovanni Asproni
Usability – as defined in Wikipedia – is the ease of use and learnability of a human-made object. The object of use can be a software application, website, book, tool, machine, process, or anything a human interacts with.
When we talk about usability in the software world, we typically refer to how easy it is for end users to interact with an application through a graphical user interface (GUI): the easier it is for a user to learn how to use the application to achieve his goals, the more usable the application is. However, the same idea can be applied to APIs – in this context an API is any well-defined interface that defines the service one component, module, or application provides to other software elements (http://www.ufpa.br/cdesouza/pub/p390-desouza.pdf) – we just need to replace "application" with "API" and "user" with "programmer": the easier it is for a programmer to learn how to use the API to achieve his goals, the more usable the API is. Note that, by this definition, all programmers are API writers. When working on non-trivial systems, we always end up implementing one or, more often, several APIs to accomplish our tasks, either because the product itself is a library or framework to be used by third parties, or because we need to group related pieces of functionality into frameworks, packages and libraries that are then used to build our applications. Although the literature on API usability is not very rich, the subject is very important, since a usable API helps:
• avoid many mistakes that can introduce bugs into applications
• increase productivity by making it easier to implement functionality better and faster
• keep the code clean and maintainable
• newcomers learn the ropes of a new application more quickly
Most programmers have an intuitive grasp of usability. However, very few think of it in a structured way or know how to evaluate, in an objective manner, the implications their design choices have on it – we often think our code is perfect until someone else tries to use it and gives us feedback that makes us reconsider our decisions. In the following I'll briefly introduce some of the basics.

AFFORDANCES
A central concept when talking about usability is that of affordance, i.e. (http://en.wikipedia.org/wiki/Affordance): a quality of an object, or an environment, that allows an individual to perform an action.
For example, a door affords being opened and closed; for an example in a software context, the java.io API affords reading and writing a file. Affordances are not always obvious. An everyday example is a door that should be pushed open but has a handle that invites us to pull it instead. For an equivalent software example, try to read a text file line by line using Java 6 or earlier – it is definitely possible, but certainly not intuitive (I'll leave this as an exercise to the readers). The more visible the affordances are, the more usable the API will be.

COGNITIVE DIMENSIONS
Measuring usability is a complex process. The following is a list of some (not all) important cognitive dimensions that affect the usability of an API:
• Abstraction level. The minimum and maximum levels of abstraction exposed by the API
• Working framework. The size of the conceptual chunk (developer working set) needed to work effectively
• Progressive evaluation. To what extent partially completed code can be executed to obtain feedback on code behaviour
• Penetrability. The extent to which a developer must understand the underlying implementation details of an API
• Consistency. How much of the rest of an API can be inferred once part of it is learned
Ideally, a very usable API would expose very few abstraction levels, have a small working framework for each piece of functionality, allow progressive evaluation, require little or no understanding of the underlying implementation details, and be very consistent. In practice, when designing an API there will always be other constraints to take into account – performance, threading, memory footprint, etc. – which sometimes have a negative impact on usability and make compromises necessary.

CONCLUSION
I have introduced some important usability concepts only very briefly – this subject cannot be treated in depth in a short article – but I hope to have stimulated your curiosity enough both to look for more and to start thinking about usability in a more structured way during your programming activities. I'll be presenting a session about writing usable APIs in practice at NDC Oslo 2013, where I will also present some techniques that you can readily apply to your projects, as well as give some references to further information. I hope to see you there.
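For readers who want an answer to the file-reading "exercise" mentioned earlier, here is one possible Java 6-era version. Note how nothing in java.io advertises that BufferedReader is the piece that affords readLine(): you simply have to know to wrap a FileReader in one.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// One possible answer to the exercise: reading a text file line by line,
// Java 6 style. The composition (FileReader inside BufferedReader) is not
// something the API itself makes visible - a hidden affordance.
public class ReadLinesJava6 {
    public static void main(String[] args) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(args[0]));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            reader.close(); // no try-with-resources before Java 7
        }
    }
}
```

By contrast, Java 7 added java.nio.file.Files.readAllLines, which makes the affordance directly visible in the API – which is exactly the usability point being made.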
Giovanni is a consultant specialised in helping companies and teams to become more effective at producing and delivering high quality software. He is a contributor to the book “97 Things Every Programmer Should Know” published by O’Reilly. More about him at http://www.asprotunity.com
PARTNERS & ORGANIZERS
We are proud to present the sixth NDC. Thanks to our partners who make it happen! For information about partnership deals, please contact: henriette.holmen@programutvikling.no, tel.: +47 976 07 456
ndcoslo.com
Become a partner today!
COURSE OVERVIEW OSLO

PRE-CONFERENCE WORKSHOPS AT NDC (all at Oslo Plaza; prices in NOK)
Accelerated Agile: from months to minutes – Dan North · 2 days · 10 June · 8900
Behaviour-Driven Development – Liz Keogh · 1 day · 10 June · 5900
Building Windows 8 Apps – Rocky Lhotka · 1 day · 11 June · 5900
Claims-Based Identity & Access Control for .NET 4.5 Applications – Dominick Baier · 2 days · 10 June · 8900
Clean Architecture – Robert C. Martin · 1 day · 10 June · 5900
Continuous Integration and Delivery Workshop – Hadi Hariri · 1 day · 10 June · 5900
Day of Clojure – Stuart Halloway · 1 day · 10 June · 5900
Day of Datomic – Stuart Halloway · 1 day · 11 June · 5900
Develop Mobile Applications with C# and .NET – Jonas Follesø/Chris Hardy · 1 day · 10 June · 5900
Embracing Uncertainty – Liz Keogh · 1 day · 11 June · 5900
Hacking .NET (C#) Applications: A quick dip · 1 day · 11 June · 5900
JavaScript: Getting your feet wet – Dr. Venkat Subramaniam · 1 day · 10 June · 5900
JavaScript: Taking a Deep Dive – Dr. Venkat Subramaniam · 1 day · 11 June · 5900
ReSharper Workshop – Hadi Hariri · 1 day · 11 June · 5900
TDD Overview – Robert C. Martin · 1 day · 11 June · 5900
AGILE
Agile Estimating and Planning – Mike Cohn · 1 day
BDD – Specification by Example – Gojko Adzic · 3 days
Coaching Agile Teams Course – Lyssa Adkins & Michael K. Spayd · 2 days
Effective User Stories for Agile Requirements – Mike Cohn · 1 day

SCRUM
Certified Scrum Product Owner – CSPO – Mike Cohn · 2 days
Certified Scrum Product Owner – CSPO – Geoff Watts · 2 days
Certified ScrumMaster – CSM – Geoff Watts · 2 days
Certified ScrumMaster – CSM – Mike Cohn · 2 days
Smidig utvikling med Scrum (Agile development with Scrum) – Arne Laugstøl · 1 day

TESTING
Whole Team Approach to Agile Testing – Janet Gregory

TEST-DRIVEN DEVELOPMENT
Test-Driven Development – Venkat Subramaniam
Test-Driven Development & Refactoring Techniques – Robert C. Martin
Test-Driven JavaScript – Christian Johansen
BDD – Specification by Example – Gojko Adzic

DESIGN – ANALYSIS – ARCHITECTURES
Evolutionary Design and Architecture for Agile Development – Dr. Venkat Subramaniam · 5 days
Agile Design and Modeling for Advanced Object Design with Patterns – Craig Larman · 4 days
Architecture Skills – Kevlin Henney · 3 days
Designing User Experiences: Principles and Techniques for Developers, Managers and Analysts – Billy Hollis · 3 days
The Architect's Master Class – Juval Lowy · 5 days

MOBILE APPLICATIONS
Android Developer Training – Wei-Meng Lee · 5 days
MOB101 – 3-Day Writing Cross Platform iOS and Android Apps using Xamarin and C# – Wei-Meng Lee · 3 days
Aral Balkan's Modern iOS Development – Aral Balkan · 3 days
Designing the mobile user experience – Aral Balkan · 2 days

MICROSOFT
70-513 – WCF 4.5 with Visual Studio 2012 – Sahil Malik · 5 days
C#.NET: Utvikling av applikasjoner i .NET med C# (Developing applications in .NET with C#) – Arjan Einbu · 5 days
ASP.NET Web API & SignalR: Lightweight Web-Based Architectures for you! · 2 days
Claims-based Identity & Access Control for .NET 4.5 Applications – Dominick Baier · 2 days
Creating Windows 8 Apps using C# and XAML – Gill Cleeren · 3 days
Creating Windows 8 Apps using HTML5 and JavaScript – Christian Wenz · 3 days
Enterprise Development with NServiceBus – Daniel Marbach · 4 days
Web Development in .NET – ASP.NET MVC, HTML5, CSS3, JavaScript – Scott Allen/Arjan Einbu · 5 days
Programming Mobile sites with ASP.NET · 3 days
WPF/XAML – 70-511/10262A Windows Presentation Foundation/XAML – Arne Laugstøl · 3 days
Zero to Microsoft Business Intelligence – Peter Myers · 5 days

SHAREPOINT
SharePoint 2010 and Office 365: End to End for Developers and Designers – Sahil Malik · 5 days
SharePoint 2013 and Office 365: End to End for Developers – Sahil Malik · 5 days

JAVA
Core Spring – Mårten Haglind · 4 days
Effective JPA – Industrial strength Java Persistence with JPA 2.0 and Hibernate · 2 days
Effective Scala – Jon-Anders Teigen · 3 days
Play 2.0 for Java – Mårten Haglind · 2 days
Programming Java Standard Edition – Peet Denny · 5 days
Spring and Hibernate Development – Mårten Haglind · 5 days

HTML5 – JAVASCRIPT – CSS3
JavaScript and HTML5 for Developers – Christian Wenz · 3 days
JavaScript for programmers – Christian Johansen · 3 days
Practical CSS3 – Chris Mills · 2 days
Test-Driven JavaScript – Christian Johansen · 3 days
Writing Ambitious Webapps with Ember.js – Joachim Haagen Skeie · 3 days

C++
Advanced C++ programming – Hubert Matthews · 4 days
C++-501: C++ for Embedded Developers – Mike Tarlton · 5 days
C++11 programming – Hubert Matthews (IT Fornebu) · 3 days
C++11 programming – Hubert Matthews (Kongsberg) · 3 days
Effective C++11 Programming – Scott Meyers · 2 days
Programming in C++ – Mike Tarlton · 4 days
Deep C: Et kurs for erfarne C og C++ programmerere (a course for experienced C and C++ programmers) – Olve Maudal

XML
Exchanging and Managing Data using XML and XSLT – Espen Evje

DATABASE
Databasedesign, -implementering og SQL-programmering (Database design, implementation and SQL programming) – Dag Hoftun Knutsen

PROGRAMMING
Objektorientert utvikling (Object-oriented development) – Eivind Nordby

EFFECTIVE CONCURRENCY
Effective Concurrency – Herb Sutter · 3 days · 26th-28th March 2014

www.programutvikling.no
COURSE OVERVIEW LONDON

AGILE & SCRUM
Advanced Scrum Master Course – Geoff Watts & Paul Goddard · 2 days
Agile Estimating and Planning Training – Mike Cohn · 1 day
Architecture with Agility – Kevlin Henney · 3 days
Certified Scrum Master – CSM – Geoff Watts · 2 days
Certified Scrum Master – CSM – Mike Cohn · 2 days
Certified Scrum Product Owner – CSPO – Geoff Watts · 2 days
Certified Scrum Product Owner – CSPO – Mike Cohn · 2 days
Effective User Stories for Agile Requirements Training – Mike Cohn · 1 day
PMI® Agile Certified Practitioner (PMI-ACP) – Didier Soriano · 2 days
Succeeding with Agile – Mike Cohn · 2 days
Working on a Scrum Team – Kenny Rubin · 3 days
Writing Effective User Stories Training – Kenny Rubin · 2 days

ARCHITECTURE – DESIGN – SECURITY – MASTERCLASSES
Advanced Windows Security Masterclass – Paula Januszkiewicz · 3 days
Architecture Clinic – Michael 'Monty' Montgomery · 5 days
The Architect's Master Class – Juval Lowy · 5 days
The Cloud Architect Master Class – Shy Cohen · 5 days
Windows Communication Foundation (WCF) Master Class – Miguel Castro · 5 days

C++
Advanced C++ Programming – Hubert Matthews · 4 days
Effective C++11 Programming – Scott Meyers · 2 days

MICROSOFT .NET
Building Applications with ASP.NET MVC 4 – Scott Allen · 3 days
Building Applications with ASP.NET MVC 4/HTML5 Course – Scott Allen · 5 days
C#.NET: Developing applications in .NET with C# – Gill Cleeren · 5 days
Creating Windows 8 Metro Apps using C# and XAML – Gill Cleeren · 3 days
Design Concepts, Architecture, and Advanced Capabilities for WPF – Billy Hollis · 4 days
WCF – Windows Communication Foundation (WCF) 4 – Sahil Malik · 5 days
Working with HTML 5, CSS 3, and JavaScript – Scott Allen · 2 days

MOBILE APPLICATIONS
Building Android Apps – John Denny · 4 days
Fundamentals of Android Programming – Wei-Meng Lee · 5 days
Fundamentals of iOS Programming – Wei-Meng Lee · 5 days
Mono for Android Course – John Sonmez · 2 days
MonoTouch and Mac Development Course – John Sonmez · 3 days
MonoTouch and Mono for Android Course – John Sonmez · 5 days

JAVA
Core Spring 3.0 – Kaare Nilsen · 4 days
Advanced Java Programming – Martijn Verburg · 1 day
Java Virtual Machine (JVM) – Ben Evans · 1 day
Understanding Advanced Java & The Java Virtual Machine (JVM) – Ben Evans · 3 days
Object-Oriented Programming in Java – John Denny · 5 days
Spring and Hibernate Development – Kaare Nilsen · 5 days

JAVASCRIPT – HTML
Mobile Web Apps – Remy Sharp · 1 day
Node & HTML5 for a real-time web – Remy Sharp · 2 days
JavaScript and HTML5 for Developers – Christian Wenz · 3 days
Test-Driven JavaScript – Christian Johansen · 3 days
Using HTML5 and JavaScript to build Web Apps – Remy Sharp · 1 day

SHAREPOINT
SharePoint 2013 and Office 365: End to End for Technical Audience – Sahil Malik

SILVERLIGHT
Silverlight 5 Workshop – Gill Cleeren

USER EXPERIENCE & DESIGN
Designing User Experiences: Principles and Techniques for Developers, Managers, and Analysts – Billy Hollis
Developing Software That Doesn't Suck: A Master Class on User Experience Design – David Platt · 5 days

EFFECTIVE CONCURRENCY
Effective Concurrency – Herb Sutter · 26th-28th March 2014

DeveloperFocus Ltd, London – by ProgramUtvikling AS
www.developerfocus.com
ProgramUtvikling
The office and course rooms are located in the IT Fornebu technology park, approximately 10 minutes from central Oslo. Address: Martin Lingesvei 17-25, 1367 Snarøya.
Tel.: +47 67 10 65 65 – Fax: +47 67 82 72 31
www.programutvikling.no – info@programutvikling.no

DeveloperFocus
Courses are held at De Vere's Holborn Bars, 138-142 Holborn, London EC1N 2NQ. Office address: Suite G4, 5 St John's Lane, London EC1M 4BH.
Tel.: +44 0843 523 5765
www.developerfocus.com – info@developerfocus.com
Training for developers and leaders in Oslo, London or wherever you like
ProgramUtvikling offers the best courses and most flexible training to the developer community, wherever you want us to. Our philosophy is to provide "practical, applicable knowledge" through the highest quality training with the best instructors. In addition to our permanent staff we have a number of exciting and well-known instructors such as Herb Sutter, Craig Larman, Billy Hollis, Mike Cohn, Geoff Watts, Gill Cleeren, Sahil Malik and a great staff of excellent Scandinavian instructors.
NDC has become one of the largest conferences dedicated to .NET and Agile development, and is hosted by ProgramUtvikling AS. www.ndcoslo.com – www.ndclondon.com
HTML5 • JavaScript • Mobile • Agile • Scrum • Design • .NET • Java • Architecture • TDD • SharePoint • C++
OSLO – www.programutvikling.no
LONDON – www.developerfocus.com
Agenda and Practical Information
All you need to know about NDC 2013
• 3-day conference with 8 parallel tracks
• 2 days of pre-conference workshops
ndcoslo.com
THE AGENDA COMMITTEE is pleased to announce the 6th annual Norwegian Developers Conference, which will take place at the Oslo Spektrum venue. By Charlotte Lyng
Even though NDC 2012 was an unqualified success, this year's Agenda Committee is confident that this year's conference will be the best ever held. Thanks to all the amazing feedback from former NDC participants, the committee has not only been able to engage the best-known speakers in the industry, but also to create an extensive programme that will allow you to pick sessions suited to your particular interests. The committee members hope that this year's NDC will leave you inspired and confident to excel in the fast-paced industry of software development.
TORSTEIN BJØRNSTAD, SENIOR CONSULTANT AT WEBSTEP
Torstein Bjørnstad is a consultant focused on helping clients make use of modern web technology to provide rich, interactive experiences to their users. He spends his spare time diving into and evaluating new technologies in his many fields of interest. This is the first year Mr Bjørnstad is part of the NDC programme committee, and he was brought in to represent the views of developers working with the .NET stack.
JONAS FOLLESØ, MANAGER AND SCIENTIST IN BEKK
Jonas Follesø spends part of his time as a scientist, active in the Norwegian professional community as a lecturer, article author and the like. In addition, he works as a consultant and developer, with mobile as one of the areas he is particularly interested in. This is the second year Mr Follesø is a member of the NDC programme committee; he finds it an incredibly exciting, stressful and challenging task, and has had an extra focus on mobile as a theme.
"NDC has grown every year, and when we have such an extreme number of qualified lecturers and course holders in Oslo, it would be silly not to use the opportunity to really go into depth on some themes using the pre-conference workshops", says Mr Follesø. "NDC has developed not only into an important arena in Norway, but in time also an internationally known conference with a focus on agile development and alternative thinking on the Microsoft and .NET platform. NDC has also developed into a conference which is not only relevant for .NET developers, but for everyone who works with development in one form or another", says Mr Follesø.
"I have attended NDC the last three years, and I think it has become a conference for all kinds of developers. While it used to be targeted more at the .NET audience, I think even more kinds of developers are going to enjoy the programme we have put together this year."
KJERSTI SANDBERG, GENERAL MANAGER OF PROGRAMUTVIKLING
Kjersti Sandberg is the founder of ProgramUtvikling AS and has organized the NDC Conference from the beginning. Her daily job covers sales, marketing and business development. Her role on this year's committee has been administrative: communicating with speakers and contributing good ideas. "The fact that NDC has expanded into a whole week allows you to compose your own professional tapas. This must be the peak for those who want to get the most out of five knowledge days in an exciting, enjoyable and social way", she adds.
TORE VESTUES, MANAGER OF COMPETENCE AND QUALITY IN BEKK
Tore is a developer at heart, but his curious nature and drive for understanding the entire life cycle of software development have made him fill many different roles. In addition to being a developer he has taken on roles such as test manager, quality manager, project manager and lead architect. Tore describes himself as obsessed with quality, and a strong believer in the agile mindset. "Many have felt as if agile has been at a standstill for a while," Tore says. "This isn't quite true. Agile is still a major influence, but these days it often appears as new trends under different names." On the committee, Tore has an extra focus on capturing these trends.
PETRI WILHELMSEN, DEVELOPER MARKETING MANAGER IN MICROSOFT
Petri Wilhelmsen's daily work is on the Evangelist team at Microsoft Norge. There he is responsible for reaching out to developers in Norway with Microsoft technology for developers, as well as working closely with universities and schools to provide them with knowledge about Microsoft technology. In addition, he is a very active coder with a passion for graphics programming, algorithms and game development. "My role has been to help NDC find lecturers, themes and sessions for the agenda. This is a big job where we on the committee must sit down together, evaluate lecturers and the sessions we wish to select from the call for papers, distribute them across the days and make tracks. It has been incredibly rewarding working with such a competent gang as this year's committee", says Wilhelmsen. He also illustrates how tough it has been to choose between the various abstracts.
"A lot of good abstracts have been received this year, and it has been incredibly difficult choosing among the best. We would like to have had more days and more tracks to fit in everything we wanted. Unfortunately, this is not possible, and we have had long evenings discussing whom we wished to include", he says.
JAKOB BRADFORD, GENERAL MANAGER OF PROGRAMUTVIKLING AND NDC
As General Manager for NDC, Jakob Bradford has had the main responsibility for seeing that they reach their goals with the agenda and the completion of the conference. "Both the number and quality of the abstracts have been very high this year. We have received more abstracts from more speakers than ever. The high quality meant that we added an extra track, so that this year there will be eight parallel tracks. The number of speakers and abstracts shows the position NDC has as we enter our fifth year. Now I am very much looking forward to being present at the pre-conference days."
SVEIN ARNE ACKENHAUSEN, INDEPENDENT CONSULTANT AND LECTURER
Svein Arne Ackenhausen works in Contango Consulting AS, a small consultancy company that delivers software services and training, where he works mainly with .NET development assignments and courses. In addition to his daily consultancy work he works on the products ContinuousTests (Mighty Moose), AutoTest.Net and OpenIDE. "NDC has grown into a recognised conference, and I have to say that the standards are set higher than in previous years. It is incredibly enjoyable to be on the committee. It is not every day that one can select from the best technical lecturers in the world. I felt a little like a three-year-old in a candy store", says Ackenhausen. "One should register for the conference because it is a unique opportunity to learn many of the most exciting things taking place in development today, from several of the best in the industry. From experience, and as mentioned by several of the speakers, it is a conference with a very good atmosphere. There is a lot to learn both in the sessions and outside them, through exciting discussions during breaks and in the evenings."
BODIL STOKKE, SERIAL CONFERENCE ORGANISER AND DEVELOPER AT COMOYO
Bodil is a frequent speaker at developer conferences internationally, and helps organise the Web Rebels and flatMap(Oslo) conferences and two local meetup groups, in addition to serving on the agenda committee for NDC. She is a prolific contributor to the Free Software community, primarily as a Clojure developer, and has recently taken up designing new programming languages as a hobby. In her spare time, she works as a web developer for Comoyo. "While a majority of talks are still focused on Microsoft technologies, my aim has been to help NDC lose its reputation for being a purely .NET conference and diversify into areas like the Web and functional programming. I'm pleased to report that in addition to the regular agile and enterprise fluff, this year's NDC has also gathered one of the best lineups for a functional programming conference I've seen in a long time – not to mention the large number of stellar Web technology speakers. This will be the first year the schedule is crammed completely full of things I, as a non-Microsoft developer, am excited about," says Ms Stokke.
83
Violet Road, consisting of (clockwise from top left) the brother quartet Hogne, Håkon, Herman and Halvard Rundberg and vocalist Kjetil Holmstad-Solberg, enters the Oslo Spektrum stage in June at NDC 2013. Photo: Knut Aaserud
DON’T JUST ATTEND NDC. EXPERIENCE IT! Having been at NDC before, one simply does not come back with expectations of great sessions and mingling with peers alone. You come back expecting a great total experience, with tasty food, fun entertainment and great music as well. And NDC 2013 will not disappoint with its line-up. by Inger Renate Moldskred Stie
CULINARY CELEBRATION How often do you get the chance to eat a culinary breakfast, lunch and dinner from different restaurants, all in the same place? At NDC 2013, Programutvikling will, together with .netRocks and partners Miles, Systek, CGI and Computas, present you with a variety of culinary foods, served by the chefs from Flying Culinary Circus, who have been with us before. They continue to deliver tasty food, and at this year’s conference they’ll present us with three different dishes as the days progress. You may find and eat a certain dish for breakfast, then visit the same restaurant later and find a new dish to taste. This triples the food choices compared to the previous year. The restaurants span Italian, Asian, American, Spanish, Norwegian and British cuisine. The latter is put together especially in honour and celebration of the new conference, NDC 2013 London, which will be held December 2nd–6th in the British capital. In addition, you’ll be served great coffee and pastries at the coffee house. TIME FOR NOSTALGIA During NDC there’s always a great party mood, using Oslo Spektrum for its true purpose: filling the grand room with great
84
music. And this year is no exception, as we look back and remember the beginning. Not the beginning of NDC, nor the beginning of .NET, but even further back: back to the time when the home computers arrived, with all the excitement that followed. PRESS PLAY ON TAPE is a C64 revival band, (almost) exclusively playing tunes from the 80s home computer Commodore 64 as rock on real instruments. They are here to entertain you after a day of great sessions and new knowledge, with familiar sounds presented in a new way. You do not want to miss this! VIOLET ROAD ENTERS OSLO SPEKTRUM Up-and-coming Violet Road, “a nest filled with four brothers and a vocalist with his voice outside the shirt”, will perform and entertain at the conference. From how they present themselves and their music, we can say this: Violet Road will serve you a mixture of “pop, melancholic moods, a reckless conviviality, beautiful harmonies, accordion, mandolin, saxophone, keys in all shapes and a touch of Swedish folk tradition”. Prior to NDC 2013, the band has played for 1200 people at Tromsø Festival and at Rockefeller. They’ve even performed in
Informatics students at Kathmandu University
The University in Kathmandu, with Himalaya in the background.
NDC2013 SUPPORTS ICT-PROJECT IN NEPAL To ensure continuity, NDC2013 will continue to support the development of ICT competence in Nepal, as in previous years. The focus of this project is to improve and strengthen education, research and ICT business establishment in Nepal, based on competence developed in Norway.
Nepal is famous for its high mountains and beautiful scenery, but it is also a land of great poverty, lacking much that we take for granted. While India has been a major power in the outsourcing of ICT development, Nepal is currently a very small player. The schools at ground level are good, but there is a lack of good teaching and research at college and university level, as well as political instability, which contributes to Nepal not leveraging its potential to build an ICT industry and create more wealth.

Professor Magne Jørgensen (Simula) and Stein Grimstad (Wasteless) have established a cooperation with Kathmandu University (KU), which includes a master’s course, supervision of Nepalese
Trond Svendgård (left to right), Tor Jørgen Kamprud Arnesen, Hans Kristian Larsen and Mathias Spieler Bugge in Flying Culinary Circus will, as in previous years, cook culinary foods for NDC 2013. And this year with a new challenge: Serving three different dishes as the days progress. Facsimile: fccircus.com
Photo: Vebjørn R. S. Olsen
the home of Åge Aleksandersen, who has gone from being a huge fan to a contributor to their music; Aleksandersen contributed on “Last Days In India”. Violet Road’s songs have been playlisted on NRK P1 and played on both local and national radio for months. In June, they are ready to enter the stage of Oslo Spektrum and sing from their album “Every Peter and His Marching Band”.
master’s students, scholarships and a research cooperation. This is how they describe their efforts: – Through grants, contributions to teaching, equipment purchases and supervision of master’s students, we have over several years supported the build-up of IT at Kathmandu University. This is an effort that requires a long-term perspective and a lot of resources. The financial support from NDC, which has gone to equipment for distance learning, has been of great benefit.
LOVESHACK The band Loveshack, which is well known to many previous NDC participants, will play for us. They’ve joined in on all of our former conferences, and are a must-see at this year’s NDC as well. Loveshack will bring you many 80s classics and set the mood for a grand summer party. Join us at NDC 2013 for a tasty culinary food and cultural experience.
Warming up after a great session day at NDC 2013 is PRESS PLAY ON TAPE with their fun and nostalgic music, inspired by Commodore 64 sounds. PPOT at Roskilde (from left to right): Søren Trautner Madsen, Theo Engell-Nielsen, André Tischer Poulsen, Uffe Friis Lichtenberg and Jesper Holm Olsen. Photo: Mette Kirstine Bie
85
The capital of Norway
VisitOSLO/Nancy Bundt©Vigeland-museet BONO
Oslo is a city full of contrasts. Its natural beauty, in combination with all the facilities of a modern metropolis, adds to the charm of a city often described as “the world’s biggest village”. The surrounding countryside provides recreational opportunities for the city’s inhabitants and visitors all year round.
86
Skiforeningen/Linn Blekkerud
Oslo has a varied social and cultural life that should appeal to a wide range of people. The city offers an abundance of attractions, shopping possibilities and a flourishing cultural life. Not many capitals offer subway services to the forest, enabling the general public to go hiking, canoeing and fishing within city limits. First-time visitors are often surprised by the variety of restaurants and entertainment the city has to offer. Here are some suggestions for exciting and interesting experiences in Oslo: THE VIGELAND PARK The unique sculpture park is Gustav Vigeland’s (1869-1943) lifework, with more than 200 sculptures in bronze, granite and cast iron. A monumental artistic creation with a human message that is well worth seeing. The park is open all year round. HOLMENKOLLEN NATIONAL SKI ARENA A historic landmark in the Norwegian consciousness, Holmenkollen embodies more than a century of skiing competitions. Inside the ski jump is the Holmenkollen Ski Museum, the oldest of its kind in the world. The museum presents over 4,000 years of skiing
history, as well as Norwegian polar exploration artifacts. The observation deck on top of the jump tower offers a panoramic view of Oslo. In the summer, the arena is transformed into Oslo Summer Park, with downhill biking, climbing and a zip line. VisitOSLO/Normanns Kunstforlag/Terje Bakke Pettersen
THE NORWEGIAN OPERA & BALLET Opera fan or not, this building in itself is worth a visit. Oslo’s new Opera House is shaped like an iceberg coming out of the water, and is the first opera house in the world to let visitors walk on the roof. Learn more about the architecture, stagecraft, opera and ballet through a 50-minute guided tour.
87
VisitOSLO/Terje Bakke Pettersen
FUGLEN Quite possibly the coolest retail concept in town. Fuglen (Norwegian for ‘The Bird’) is a coffee and vintage design shop by day and a vibrant cocktail bar by night. Come here to revel in 1960s Scandinavian design nostalgia and to try cocktails with a unique Norwegian twist.
88
FROGNERSETEREN If you have time to make it all the way to the end of tram line 1 on a sunny day, you should check out this restaurant before continuing on to the Holmenkollen ski jump. Frognerseteren is a comfortable, authentic Norwegian house built in 1865 with an unforgettable view of Oslo and the fjord. This cool-looking log building is the perfect place to try some traditional Norwegian dishes. NORWEGIAN FOLK MUSEUM The Norwegian Folk Museum is one of Europe’s largest open-air museums, with 155 traditional houses from all parts of Norway and a stave church from the year 1200. The museum’s indoor exhibits show traditional handcrafted items, folk costumes,
VisitOSLO/Nancy Bundt
VisitOSLO/Tor Morten Myrseth
GRÜNERLØKKA What started as a bohemian escape has turned into one of the trendiest parts of the city. Grünerløkka, commonly referred to as “Løkka”, is known for its creative atmosphere and numerous little cafés, restaurants and bars. The borough is also a popular shopping district with original design shops and many vintage and second-hand stores.
Sami culture, weapons, toys, pharmaceutical history and other historic artifacts. The museum hosts events such as folk dancing, exhibitions, baking, church services and outdoor market activities. HOW TO GET AROUND IN OSLO The Oslo Pass is your ticket to the city. It is the easiest and most inexpensive way to experience the city. The Oslo Pass provides free travel on all public transport, free admission to museums and sights, free parking in all Oslo municipal car parks, and discounts on car hire, Tusenfryd Amusement Park and more.
VisitOSLO/Nancy Bundt VisitOSLO/Nancy Bundt
VISIT NORWEGIAN WOOD ROCK FESTIVAL Norwegian Wood is a 4-day rock festival held in mid-June in Oslo. The festival always presents big international stars, but also unknown Norwegian bands performing at their first festival. Through the years, the main stage has hosted artists like Bob Dylan, Van Morrison, Sting, Bryan Ferry, David Bowie, Roger Waters, Tori Amos and
Lou Reed. On Thursday and Friday the concerts start in the afternoon, while Saturday and Sunday are filled with concerts all day until late in the evening. Friday has been established as the “dark” day, with hard rock and metal on the programme.
PROGRAMME 2013: 13 JUNE: Keane/ Noah and the Whale 14 JUNE: Nick Cave & The Bad Seeds/ Band of Horses / I was a King 15 JUNE: Manic Street Preachers / My Bloody Valentine / El Cuero /Rival Sons 16 JUNE: Rod Stewart / Maria Mena / Jonas Alaska / Charles Bradley and His Extraordinaires /
Lee Bains III & The Glory Fires
Oslo in numbers
• Total area: 454 square kilometres • Population (2011): 600,000 • Forest area: 242 square kilometres • Park and sports arena area: 8 square kilometres • Lakes within the city limits: 343
• Islands within the city in the Oslofjord: 40 • Length of the Oslofjord: 100 kilometres
89
Pre-conference workshops
Accelerated Agile: from months to minutes - June 10-11th
Claims-based Identity & Access Control for .NET 4.5 Applications - June 10-11th
Develop Mobile Applications with C# and .NET - June 10th
CI and Delivery Workshop - June 10th
Day of Clojure - June 10th
ReSharper Workshop - June 11th
Day of Datomic - June 11th
ndcoslo.com
NDC 2013 is one of the World’s largest conferences dedicated to .NET and Agile development.
June 10-11th on top of Plaza! Plaza Panorama
1-day Workshop NOK 5.900,-
2-day Workshop NOK 8.900,-
Clean Architecture - June 10th
Behaviour-Driven Development - June 10th
TDD Overview - June 11th
Embracing Uncertainty - June 11th
Building Windows 8 Apps - June 11th
JavaScript: Getting your feet wet - June 10th
JavaScript: Taking a Deep Dive - June 11th
PROGRAM – Wednesday
Tracks: Agile, Mobile, Cloud, Programming Languages, Database, Security, Architecture, Tools, Devops, Design/UX, Miscellaneous, Web, Microsoft, Testing
Rooms: Room 1 – Room 8, plus Workshop
TIMESLOT 09:00 - 10:00
The Science of Communities Behind Software Joel Spolsky
92
10:00 - 10:20
Break
10:20 - 11:20
Accelerating Agile: hyper- performing teams without the hype
Making Magic: Combining Data, Information, Services and Programming at Internet Scale
Continuously Deploying Complex Apps Doesn’t Have to Suck!
Scott Guthrie 1
Concurrent and High- Performance Programming in .NET with TPL, async/await, and Dataflow
Maintainable CSS - The Next Frontier of Front-End Engineering
SP2013 Workflows and you
Better Software — No Matter What
Dan North
Don Syme
Jeff French
Scott Guthrie
Michael Heydt
Kristofer Walters
Sahil Malik
Scott Meyers
11:20 - 11:40
Break
11:40 - 12:40
Effective Leadership: How to avoid anti-learning Agile advice
Clean Architecture and Design
Powershell for developers
Scott Guthrie 2
Code Digger, exploring input/ output of .NET methods
From requests to responses: a journey into the ASP.NET Web API runtime architecture
Enterprise hipster-apps with SharePoint and JavaScript
Better Software — No Matter What
Refactoring Noda Time (part 1)
Benjamin Mitchell
Robert C. Martin
Vidar Kongsli
Scott Guthrie
Jonathan Peli De Halleux
Pedro Félix
Jørn Are Hatlelid
Scott Meyers
SharePoint 2013 Search - What’s new and cool
Better Software — No Matter What
12:40 - 13:40
Lunch
13:40 - 14:40
How much is a great developer worth?
Succeeding with Functional-first Programming in Industry
Windows - Having its ass kicked by Puppet and PowerShell since 2012
Cloud Messaging with Node.js and RabbitMQ
The road to Atlantis - Right past that bend beyond Temporal Coupling Lane.
Backbone is supposed to give me structure, but everything is still just a mess
Magne Jørgensen & Stein Grimstad
Don Syme
Paul Stack
Alvaro Videla
Indu Alagarsamy
Hans Magnus Inderberg & Kim Joar Bekkelund
Helge Grenager Solheim
14:40 - 15:00
Break
15:00 - 16:00
Patterns of Effective Teams
It could be heaven or it could be hell: Pleasure and peril of being a polyglot programmer.
(Re-) architecting for Continuous Delivery
Racing Thru the Last Mile: Cloud Delivery Web-Scale Deployment
Practical Publishing for Profitable Programmers
Ember.js in Action
Dan North
Venkat Subramaniam
Jez Humble
Alex Papadimoulis
Peter Cooper
Joachim Haagen Skeie
Kevin Dockx
Jon Skeet
Scott Meyers
Windows 8 Store Apps – An Introduction
Better Software — No Matter What
Building Applications with ASP.NET MVC (Workshop)
Scott Meyers
Scott Allen
16:00 - 16:20
Break
16:20 - 17:20
Reintroducing Business Analysis into the Agile Stream and The Need for Structuring the Conversation with Stakeholders
All you need to know about TypeScript
You are not (only) a software developer! - Simplicity in practice
Brewing Beer with Windows Azure
Principles of Component Design.
Patterns of large-scale JavaScript applications
Live coding: The Windows Store Apps showdown - C# vs JavaScript
Better Software — No Matter What
Building Applications with ASP.NET MVC (Workshop)
Howard Podeswa
Torstein Nicolaysen
Russ Miles
Maarten Balliauw
Robert C. Martin
Kim Joar Bekkelund
Iris Classon
Scott Allen
Scott Meyers
17:20 - 17:40
Break
17:40 - 18:40
++ Building Open Source Communities Through Social Architecture
Faking Homoiconicity in C# with graphs
Chef for developers
Reactive meta-programming with drones
TBA
How to cope with overnight success - Scaling your web app fast and cheap
From Windows Forms to WinRT
Better Software — No Matter What
Building Applications with ASP.NET MVC (Workshop)
Pieter Hintjens
Mark Seemann
Erling Wegger Linde
Jonas Winje, Einar W. Høst & Bjørn Einar Bjartnes
Lightning Talks
Gaute Magnussen
Rockford Lhotka
Scott Meyers
Scott Allen
PROGRAM – Thursday
Tracks: Agile, Mobile, Cloud, Programming Languages, Database, Security, Architecture, Tools, Devops, Design/UX, Miscellaneous, Web, Microsoft, Testing
Rooms: Room 1 – Room 8, plus Workshop
TIMESLOT 09:00 - 10:00
Real World Polyglot Persistence
Adopting Continuous Delivery
Hacking .NET(C#) Application: An Unfriendly Territory
Ground Control to Major Tom
Require JS
How to Change the World
Windows 8 Store Apps – From Turtle to Rabbit
Building clean and cohesive concurrent systems with F# agents
Jimmy Bogard
Jez Humble
Jon McCoy
David Nolen
Sebastiano Armeli Battana
Jurgen Appelo
Kevin Dockx
Simon Skov Boisen
UX by Developers
Hacking .NET(C#) Application: Code of the Hacker
Abusing C#
Creating Web Experiences with Users in Mind
Don’t let your process hide your ignorance
Windows Phone 8 – The advanced session
Spearheading the future of programming
The Seven Pillars Of Collaboration - Why agile teams need HISTORY in order to collaborate
Jon McCoy
Jon Skeet
Divya Manian
Liz Keogh
Iris Classon
Venkat Subramaniam
Geoff Watts
10:00 - 10:20
Break
10:20 - 11:20
CQRS
Hypermedia with WebAPI
Anders Ljusberg
Fredrik Kalseth
11:20 - 11:40
Break
11:40 - 12:40
Telephones and postcards: our brave new world of messaging
All projects should do usability testing!
Securing ASP. NET Web APIs and HTTP Services
Functional Programming You Already Know
Building Third-party Widgets and APIs using JavaScript
Your Path through Agile Fluency
Developing Games with Windows 8
The Seven Pillars Of Collaboration - Why agile teams need HISTORY in order to collaborate
Jimmy Bogard
Ram Yoga
Dominick Baier
Kevlin Henney
Torstein Bjørnstad
James Shore
Petri Wilhelmsen
Geoff Watts
12:40 - 13:40
Lunch
13:40 - 14:40
Successfully retrofitting extensibility into established software products
Information Alchemy: Presentation Patterns (& Anti-patterns)
Defensive Programming 101 v3
Erlang: a jump-start for .NET developers
Being an Anti-social Geek is harmful
Make Impacts, Not Software
Sharing code with MVVM Light in Windows 8 and Windows Phone
People, Process, Tools – The Essence of DevOps
Jan Dolejsi
Neal Ford
Niall Merrigan
Bryan Hunter
Hadi Hariri
Gojko Adzic
Laurent Bugnion
Richard Campbell
14:40 - 15:00
Break
15:00 - 16:00
The Architecture of Uncertainty
Web Usability on a Budget
OAuth2 – The good, the bad and the ugly
Certifying your car with Erlang
Advanced HTTP Caching and patterns for Ninja Unicorns
Do it right, then do the right thing
Data is everywhere. Also in your Windows 8 app
Tekpub’s Full Throttle! Live on Stage with Jon Skeet
Kevlin Henney
Tim G. Thomas
Dominick Baier
John Hughes
Sebastien Lambla
Allan Kelly
Gill Cleeren
Jon Skeet & Rob Conery
16:00 - 16:20
Break
16:20 - 17:20
DDD / CQRS / ES – Misconceptions and Anti-patterns
A Developer’s Guide to Design Frameworks (and More!)
Securing a modern JavaScript based web app
What Every Hipster Should Know About Functional Programming
Game on: Developing HTML5 games
Growing software from examples
Applied MVVM in Windows 8 apps: not your typical MVVM session!
TBA
Ashic Mahtab
Tim G. Thomas
Erlend Oftedal
Bodil Stokke
Anders Norås
Seb Rose
Gill Cleeren
Lightning Talks
17:20 - 17:40
Break
17:40 - 18:40
TBA
SQL Server’s Last Breath
Architecting PhoneGap Applications
Rigorous, Professional JavaScript
ClojureScript: Lisp’s Revenge
How simple maths and BELIEF can help you coach people to change
How simple maths and BELIEF can help you coach people to change
Leave the backend to us: building mobile apps with Azure Mobile Services
Cage Match with Rob Conery
Refactoring Noda Time (part 2)
Lightning Talks
Rob Sullivan
Christophe Coenraets
James Shore
David Nolen
Geoff Watts
Yavor Georgiev
Rob Conery
Jon Skeet
93
PROGRAM – Friday
Tracks: Agile, Mobile, Cloud, Programming Languages, Database, Security, Architecture, Tools, Devops, Design/UX, Miscellaneous, Web, Microsoft, Testing
Rooms: Room 1 – Room 8
TIMESLOT 09:00 - 10:00
NuGet for the Enterprise
HTML5 JavaScript APIs
Introduction to Clojure
Rigging Plan B: How To Go Live Safely With Bubbles, Domain Toggles And Obsessive Monitoring
Sharing C# across Windows, Android and iOS using MvvmCross
TDD, where did it all go wrong
Mining your Doing SPA with Creativity Mind MVC & KnockoutJS
Alex Papadimoulis
Christian Wenz
Stuart Halloway
Robert Reppel
Stuart Lodge
Ian Cooper
Andy Hunt
Miguel Castro
10:00 - 10:20
Break
10:20 - 11:20
Code-First NoSQL with .NET and Couchbase
TBA
The Curious Clojureist
Grid Computing with 256 Windows Azure Worker Roles & Kinect
Real Cross-platform Mobile Applications - The Anti-Pattern
Value of unit test: way from good design to easy maintenance
Effective GitHubbing: The GitHub Flow
Bleeding edge ASP.NET: See what is new and next for MVC, Web API, SignalR and more…
John Zablocki
Lightning Talks
Neal Ford
Alan Smith
Chris Hardy
Katya Mustafina
Paul Betts
Jon Galloway
11:20 - 11:40
Break
11:40 - 12:40
Big Object Graphs Up Front
The Javascript Inferno - A Descent Into the Client-side MVC Netherworld
Generic Programming Galore using D
Continuously Deliver with ConDep
iOS with C# using Xamarin
Simulation Testing
Debugging your mind
Test Driving Nancy
Mark Seemann
Rob Conery
Andrei Alexandrescu
Jon Arild Tørresdal
Craig Dunn
Stuart Halloway
Andy Hunt
Christian Horsdal
Workshop
12:40 - 13:40
Lunch
13:40 - 14:40
Writing Usable APIs in Practice
Building an Application, Live, with AngularJS
Object Orientation – The Forgotten Bits
Continuous Delivery Zen on Windows Azure
Android with C# using Xamarin
Race Conditions, Distribution, Interactions-Testing the Hard Stuff and Staying Sane
Building Startups and Minimum Viable Products using Lean Startup techniques
Why Document Databases supercharge your app development
Refactoring Noda Time (part 3)
Giovanni Asproni
Rob Conery
Ashic Mahtab
Magnus Mårtensson
Craig Dunn
John Hughes
Ben Hall
Christian Amor Kvalheim
Jon Skeet
The Hip Hop Virtual Machine
The rise and fall of empires: Lessons for language designers and programmers
Web diagnostics with a Glimpse in ASP.NET
Running with Ravens
Outside-in testing with ASP.NET MVC
Uncomfortable with Agility: What has Ten+ Years got us?
Under the covers with ASP.NET SignalR
Venkat Subramaniam
Anthony van der Hoorn
Per-Frode Pedersen
Rob Ashton
Andy Hunt
David Fowler & Damian Edwards
14:40 - 15:00
Break
15:00 - 16:00
Service oriented architectures (hardcore separation of concerns)
August Lilleaas
Andrei Alexandrescu
94
16:00 - 16:20
Break
16:20 - 17:20
Deep C++
Bare-Knuckle Web Development
C# 5
Running OSS Projects: From Zero to Sixty
Functional Programming Panel
Holistic testing
Don’t do that, do this! Recommendations from the ASP.NET team
Olve Maudal
Johannes Brodwall
Jon Skeet
Nik Molnar
Carl Franklin & Richard Campbell
Jimmy Bogard
Damian Edwards
A JOB AT BEKK? We believe that interests and passion matter – both at work and outside it. Curious people simply achieve more. At BEKK you get the opportunity to develop further, both as a person and as a professional.
Check out the various job opportunities and the full breadth of our expertise on our website. www.bekk.no
95
Returadresse: ProgramUtvikling AS, Postboks 1, 1330 Fornebu

The fast way to integrate SMS into your software. Use cases include voting, ordering, booking, reminders, marketing, loyalty clubs, surveys, donations, payments, positioning, password confirmation, verification, warnings, SMS billing, merge SMS, MMS and HLR lookups – all through an HTTP API.
For more information: +47 69 20 69 20 · www.vianett.com · sales@vianett.no · Rabekkgata 9, Moss, Norway
Sign up now for a free developer account: www.vianett.com/developer
We love SMS!