Monday 10 October 2016

Feature vs User Stories: A Connected Approach

If you have been working on software development using agile methods, then you must have come across the conflicting terminology that different teams use, particularly around how they capture the requirements for a digital platform or product as features and user stories. This conflict is epitomised in a discussion thread on a popular technical forum, where features and user stories are bandied about as synonyms, sub-sets, super-sets or simply chunks of functionality.

In the following section we describe our product definition models and how the various product discovery and conceptual design outputs flow and connect with each other. These elements have been incorporated into Ness's Connected methodology. Connected is an iterative, customer-centric approach and set of processes that brings together feature discovery, user experience design and software development in a harmonious union. Our interpretation is certainly not the definitive word on this topic, but it has proven successful in dozens of projects, helping various stakeholders (business users, analysts and developers) agree on a common vocabulary and connecting the worlds of technology and business. The vocabulary used in Ness Connected is described below:
  • Personas: archetypes of users, their objectives and pain points
  • Features: important capabilities exposed by the application, e.g. a contract dashboard
  • User Journey: the workflow and touch points of the personas with each of the features
  • User Story: an interaction of a persona at a journey touch point with a feature exposed by the application. Specific acceptance criteria can be attached to each story to help validate the feature during user acceptance testing.


These key elements are typically linked together in an experience workflow (a skeleton view is shown below). Here we can see that the persona (in this case a warehouse operator) can interact with the same system capability in a number of different ways to achieve different objectives. These are captured as user (persona) stories. Persona pain points are also stories and represent specific user needs requiring attention when designing and validating the system feature. Also, each persona may interact with the same feature in a slightly different manner, and these interactions will be captured as distinct user stories.







These definitions of Connected deliverables have proven extremely useful in delivering the promise of the Ness Connected method – ‘Design the Right Product’ and then ‘Build the Product Right’. We would be interested in hearing your own experience and interpretation of features vs user stories, and the flow you use to connect the various deliverables in product discovery and design.



Monday 3 October 2016

Basic Guide to the AWS Solution Architect Associate exam!

After much procrastination, I finally signed up for and passed the SA associate exam, giving myself exactly 2 weeks to get ready. I must admit that I took the exam with much trepidation, as I felt that I had not given myself enough time to prepare. Although the exam turned out to be less tough than I expected, it is certainly not a cakewalk. In the section below, I have tried to include some pointers that may aid you in smashing the exam:

Download the exam guide

  1. Read all the white papers, especially the Security and S3 papers.
  2. Read all the FAQs.
  3. Sign up for an AWS free-tier account to get practical experience with all the AWS services. As you read each topic in the white papers, try to get hands-on experience in the following areas in particular:
    1. setting up instances in a public and/or private subnet, route tables, setting up a NAT instance to create a route to the internet for the private subnet, security groups, network ACLs
    2. IAM: setting up users, groups, roles, policies, MFA, access key IDs and secret access keys, and learn about best practices for setting up access to your AWS infrastructure.
    3. Set up an ELB pointing to your EC2 instance and then create a Route 53 zone pointing to your ELB.
  4. Specific topics that you can focus on:
    1. learn about Route 53 and the different record types. Dabble with Route 53 and load balancers if you can, although it may cost you a few pennies.
    2. learn when and how to set up bastion hosts
    3. differences between security groups and network ACLs, allowing/disallowing SSH and RDP traffic
    4. Amazon Directory Service and how to use it with your current security infrastructure.
    5. launch configurations for auto-scaling groups
    6. ELB: cross-zone load balancing; can you load balance across regions?
    7. EBS and S3 encryption options
    8. know which services (DynamoDB, ElastiCache, EFS, RDS) you can use to store application state in which cases, and session stickiness
    9. IAM: learn how to set up users, roles and access keys
    10. Billing: I think you can ignore the detailed billing examples from the FAQs; these may be more relevant to the professional exam. For the associate, it is mostly important to know when you are charged for Elastic IPs and Spot Instances.
    11. Know the USP of each AWS service and the key differentiators between similar services, e.g. DynamoDB vs RDS vs S3
    12. Understand the zone and region limitations of services, e.g. is it possible to attach an EBS volume from another region? (You can attach an EBS volume only to an instance in the same Availability Zone as the volume.)
    13. Snapshotting of EBS
    14. EBS-backed vs instance store-backed volumes, and the performance characteristics of EBS volume types.
  5. Do take a practice exam if only to get familiar with how to time yourself.
  6. Finally, there is no substitute for deploying your local web/database stack on EC2. It will clear up any questions that you may have and also expose you to the flexibility/limitations of the platform.


Also, there is a lot of well-meaning guidance out there, especially answers to practice exam questions, and you should definitely explore it. However, take care: not all answers may be correctly marked, and you should form your own opinion about each question.

Good luck! And if you do have any questions, please feel free to contact me at wilbur.desouza@ness.com.

Wednesday 6 July 2016

Reactive Programming Part 2: Building a reactive auction site

In the previous post, we were introduced to the basic concepts of reactive programming. In this part, we will prototype a real-time auction application using some of those concepts and the toolset listed in Part 1. The application will support:
  • setting up an auction with basic attributes such as title, initial price and product id
  • viewing live auctions and bidding on an item if the user is logged in
but will not support: client-side validation on forms, reporting the highest bid, or security rules.

We are going to use Google Firebase (https://firebase.google.com) as our backend. The Firebase Realtime Database is a cloud-hosted database where data is stored as JSON and synchronised in real time to every connected client. Although the technology is still evolving, it is a fantastic tool for prototyping reactive behaviour and building highly collaborative, event-driven applications.

The data model:

First, let us define a model for our auction data. It will be a JSON list of auction objects. However, instead of nesting bids within each auction object, we will have a separate bids node with a child node per auction. This relatively flat structure makes it easy and fast to search and index objects later on.

{
  "auctions" : {
    "auctionkey1" : {
      "prodId" : "2",
      "startingBid" : 1,
      "title" : "auction 1"
    },
    "auctionkey2" : {
      "prodId" : "2",
      "startingBid" : 1,
      "title" : "auction 2"
    }
  },
  "bids" : {
    "auctionkey1" : {
      "bidkey1" : {
        "price" : "10.01",
        "userid" : "user1"
      },
      "bidkey2" : {
        "price" : "10.02",
        "userid" : "user2"
      }
    }
  }
}
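To see why this flat layout helps, here is a small plain-TypeScript sketch of the same structure (an in-memory stand-in, not Firebase itself): all bids for an auction are one key lookup away, and an auction with no bids simply has no entry under the bids node.

```typescript
// Hypothetical in-memory mirror of the JSON model above (no Firebase involved).
type Auction = { prodId: string; startingBid: number; title: string };
type Bid = { price: string; userid: string };

const db = {
  auctions: {
    auctionkey1: { prodId: "2", startingBid: 1, title: "auction 1" },
    auctionkey2: { prodId: "2", startingBid: 1, title: "auction 2" },
  } as Record<string, Auction>,
  bids: {
    auctionkey1: {
      bidkey1: { price: "10.01", userid: "user1" },
      bidkey2: { price: "10.02", userid: "user2" },
    },
  } as Record<string, Record<string, Bid>>,
};

// All bids for an auction are a single key lookup; a missing entry means
// the auction has no bids yet.
const bidsFor = (auctionKey: string): Bid[] =>
  Object.values(db.bids[auctionKey] ?? {});

const bidCount = bidsFor("auctionkey1").length;  // two bids on auction 1
const emptyCount = bidsFor("auctionkey2").length; // no bids yet
```

Because bids are keyed by auction, listing auctions never has to traverse bid data, which is the point of keeping the tree shallow.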

Landing Page and Navigation

Next we set up the landing page (app.component.ts), which is a simple nav bar with a content placeholder in which the single-page application loads content based on the route configuration in app.routes.ts.


app.component.html
The template for the landing page is shown below. It uses navigation routes that are defined in app.routes.ts.


app.routes.ts
The default route will load the AuctionsComponent, which displays the list of auctions. Auctions are set up in the AuctionFormComponent.


app.component.ts
For authentication, we will use the Firebase authentication provider, which supports various social network logins in addition to email/password authentication. For the purpose of this project, I have implemented only Google authentication. Also, Firebase provides a defaultFirebaseConfig object that can be bootstrapped into the AppComponent via main.ts.

Setting up a new auction


auction-form.component.html
The auction form template is set up as a model-driven form.

auction-form.component.ts
The form's DOM elements are bound to the view model via form controls. Controls can be seen as proxy objects for the DOM elements. A Control can be bound to an input element and takes 3 arguments (all optional): a default value, a validator and an asynchronous validator. Controls can be grouped together within a ControlGroup. The state of the form is available through the control objects within the control group, but in this case we are also using the [(ngModel)] binding to bind the form to a model object in the AuctionFormComponent. This is not advisable, as the duality of state access may cause some confusion, but some may prefer the less verbose ngModel approach.
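The Control idea can be sketched framework-free. The toy classes below are illustrative only (not the Angular 2 API): a control wraps a value plus an optional validator, and a group derives its validity and value from its members.

```typescript
// Toy sketch of the Control/ControlGroup idea -- illustrative only,
// not the Angular 2 API.
type Validator = (value: string) => boolean;

class ToyControl {
  constructor(
    public value: string = "",            // default value
    private validator: Validator = () => true, // optional validator
  ) {}
  get valid(): boolean { return this.validator(this.value); }
}

class ToyControlGroup {
  constructor(private controls: Record<string, ToyControl>) {}
  // The group is valid only if every member control is valid.
  get valid(): boolean { return Object.values(this.controls).every(c => c.valid); }
  // The group's value is assembled from its member controls.
  get value(): Record<string, string> {
    const out: Record<string, string> = {};
    for (const [k, c] of Object.entries(this.controls)) out[k] = c.value;
    return out;
  }
}

const required: Validator = v => v.length > 0;
const form = new ToyControlGroup({
  title: new ToyControl("auction 1", required),
  prodId: new ToyControl("", required), // empty, so the whole group is invalid
});
const invalidBefore = form.valid;
```

The real Angular controls additionally track dirty/touched state and async validation, but the value-plus-validator shape is the core of it.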



auction.service.ts
The AuctionFormComponent uses a service component to create an auction node on the Firebase server. The AngularFire reference object is injected into the service component by Angular via the service's constructor. The getProducts() call returns a dummy list of products. Alternatively, instead of createAuction, you could bind to an observable array representing the auctions node and push new auction data directly onto the observable array using the 3-way binding of the AngularFire2 framework. This concept is used in the auction listing and bidding user story, as we will see in the next section.

Listing auction items and bidding

Before we get into the implementation of auction lists, we should familiarize ourselves with a couple of key concepts related to using firebase as a real-time datastore.

3-way binding

2-way binding (model-to-view synchronisation) is a well-known concept. However, AngularFire offers 3-way synchronisation: view to model to database. Push changes onto the synchronised object, and any changes in the DOM are pushed to Angular and then automatically to our database. Inversely, any changes on the server get pushed into Angular and straight to the DOM.
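The mechanics can be sketched in plain TypeScript. Everything below is a hypothetical stand-in (an in-memory "database" instead of Firebase, a variable instead of the DOM), just to show the two propagation directions of the third binding:

```typescript
// Minimal sketch of the 3-way binding idea: one synchronised object keeps
// a local model, a "view" listener and a fake "database" in step.
// Illustrative only -- AngularFire does this against a real Firebase backend.
type Listener = (value: number) => void;

class SyncedValue {
  private listeners: Listener[] = [];
  constructor(private value: number, private remote: { value: number }) {}
  subscribe(fn: Listener) { this.listeners.push(fn); fn(this.value); }
  // Local (view/model) change: propagate to the database too.
  set(v: number) { this.value = v; this.remote.value = v; this.notify(); }
  // Simulated server push: propagate to the model and the view.
  receiveFromServer(v: number) { this.value = v; this.notify(); }
  private notify() { this.listeners.forEach(fn => fn(this.value)); }
}

const fakeDb = { value: 0 };
const synced = new SyncedValue(0, fakeDb);
let viewValue = -1;
synced.subscribe(v => { viewValue = v; });

synced.set(42);              // a DOM change lands in the "database"...
const dbAfterSet = fakeDb.value;
synced.receiveFromServer(7); // ...and a server change lands in the "DOM"
const viewAfterPush = viewValue;
```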

Concurrent writes

If a user adds a new bid, it could be stored as /bids/auction/2. This would work if only a single user were adding auctions or bids, but in our real-time auction application many users may bid at the same time. If two users bid simultaneously, one of the bids would be overwritten by the other. To handle this, Firebase provides a push() function that generates a unique key every time a new child is added. By using unique key names for each new element in the list, several clients can add children to the same location at the same time without worrying about write conflicts. The unique ID generated by push() is based on a timestamp, so list items are automatically ordered chronologically.
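The idea behind push() keys can be sketched as follows. This is a deliberate simplification of Firebase's actual push-id algorithm: a base-36 timestamp prefix makes lexicographic order chronological, and a counter disambiguates keys minted in the same millisecond.

```typescript
// Sketch of the idea behind push(): keys begin with a timestamp component,
// so lexicographic order is chronological. This is a simplification of
// Firebase's real push-id algorithm, for illustration only.
let lastStamp = 0;
let dedupe = 0;

function pushKey(now: number = Date.now()): string {
  // Bump a counter when two keys land on the same millisecond, so that
  // simultaneous writers never collide.
  if (now === lastStamp) { dedupe++; } else { lastStamp = now; dedupe = 0; }
  const stamp = now.toString(36).padStart(9, "0");   // fixed width keeps sort order
  const suffix = dedupe.toString(36).padStart(4, "0");
  return stamp + suffix;
}

const k1 = pushKey(1000);
const k2 = pushKey(1000); // same millisecond, still a distinct key
const k3 = pushKey(1001);
const sorted = [k3, k1, k2].sort(); // lexicographic == chronological
```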



We can see these concepts in play in the auctions listing and bidding component.

auctions.component.html
The auction listing template has an ngFor loop that iterates over the auctions observable array. The async pipe unwraps each item in the auctions observable as it arrives. The product images are stock images picked from the lorempixel site, indexed by the product id within the auction object. Any selected auction is highlighted, the selectedAuction property is set to the currently selected item in the list, and you can then bid on the item.



auction.component.ts
The AuctionsComponent maintains a list of auctions as an Observable of auction items, which gets initialised through the auction service. You can then select an item and make bids by pushing onto the auctions model object. The implementation of the getAuctions() method is interesting: it returns an observable that mimics the JSON data structure we saw earlier. When we select an auction and place bids, we can access the bids array via selectedAuction.bids and push new bids onto the observable array of bids, with the selected auction as the key. In an implementation where the bids are in a separate component, you would have a getBids(key) method in the service to access the bids node directly and then push bid objects using the same auction key. We can see the auction selection and bidding actions in the animated gif below.



Selecting and bidding

Once you have selected an item and logged in, you can bid for the item. This is implemented as a template-driven form. Submitting the form pushes a JSON object onto the bids node, using the selected auction as the key.




What's missing

Besides the obvious lack of form validations and a more elaborate auction data model, the application is missing some functionality, such as ensuring that bids can be placed only if they are higher than the current highest bid. This could be implemented on the client side, but any robust implementation would ensure that the server maintains ultimate control of who is the highest bidder. Currently, Firebase does not support server-side logic beyond basic validation of access and data updates via the security rules console. There is a workaround: implement a server component (such as a Node.js client) that listens to event updates and makes post-facto updates to the Firebase state, e.g. setting the highest bidder. This pattern is illustrated below:


ref: https://www-staging.firebase.com/resources/images/blog/client_server.png

Conclusion:

This is a first attempt at putting recent Angular2 learning into practice, but hopefully it has provided you with enough motivation to embrace reactive programming, and you are as excited by the power of the reactive paradigm as I am. There will be more updates as I try to keep pace with Angular and Firebase developments, and learn more about reactive application architectural patterns within our projects.

Tuesday 5 July 2016

Reactive Programming Part 1: An Introduction

Reactive programming is a hot topic and something that I have been following with interest. Essentially, it is a programming paradigm that reacts to stimulus, which can be modelled as an event or data stream. It isn’t really a new paradigm, as spreadsheets have been using it for a long time: when you create a chart based on the value of certain cells, as you change the input in the cells, the chart ‘reacts’ to the input stream and modifies itself. The reactive programming model permeates right through all application layers, from backend services (data changes) and front-end applications (user stimulus such as keyboard events) to interactions with external services (Twitter streams). It also implies a shift from an imperative style of programming to asynchronous, functional-style code (no callback hell). This results in highly responsive user experiences and communication patterns that enable a decoupled microservices architecture.


What is it good for?

There are numerous applications where reactive principles can bring immense benefits and simplify the programming and architectural models. Some of the more obvious ones are:

  • User experience related features: search, sign up process (checking if the user already exists)
  • Social or networking style apps: collaborative document authoring, group calendars, messaging/chat apps 
  • Multiplayer games
  • Financial event based systems
  • Robotics
  • Sensor networks and IoT

Key Constructs: Streams and Observables

Modern reactive programming frameworks such as RxJS (and its other language flavours) offer efficient semantics and constructs for writing non-blocking services and applications. The core concepts of reactive programming models are streams, operators and subscriptions (bindings).

Streams:

As mentioned before, streams are sequences of events ordered in time. Stream events can be almost anything: a mouse click, a page view, an error event, an Instagram post, a person of interest entering a watch zone.



Transformation:

Streams can be transformed and combined to create new immutable streams that can then be subscribed or listened to by consumers. This is the old Observer pattern from the GoF book, but there are some key distinctions which I will cover later.
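As a plain-TypeScript sketch (arrays standing in for asynchronous streams; real frameworks such as RxJS evaluate these lazily over live sources), transforming and combining streams yields new streams while leaving the originals untouched:

```typescript
// Sketch: streams as immutable event sequences, with map/filter/merge
// producing new streams. Arrays are enough to show the shape of the idea.
type Stream<T> = ReadonlyArray<T>;

const mapStream = <T, U>(s: Stream<T>, f: (t: T) => U): Stream<U> => s.map(f);
const filterStream = <T>(s: Stream<T>, p: (t: T) => boolean): Stream<T> => s.filter(p);
const mergeStreams = <T>(a: Stream<T>, b: Stream<T>): Stream<T> => [...a, ...b];

// A stream of click events: transform to x-coordinates, keep only the
// clicks in the left half of a hypothetical 100px-wide view.
const clicks: Stream<{ x: number; y: number }> = [
  { x: 10, y: 5 }, { x: 80, y: 9 }, { x: 30, y: 2 },
];
const leftXs = filterStream(mapStream(clicks, c => c.x), x => x < 50);
// The source stream is untouched -- new streams were derived from it.
```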



Also, event streams can be generated from continuous sources, as illustrated below:

Subscription/Binding:

Streams are ‘observed’ or ‘listened to’ by means of a special implementation of the Observer pattern called an Observable. The Observable is similar to the Iterable, but unlike the Iterable, where the consumer pulls data, the Observable pushes data to the consumer when data is available, signals to the consumer when no more data is available, and signals when an error has occurred.

The Reactive Toolset

While there is a plethora of choices available to programmers for building reactive applications, these are my leanings:
  • RxJS with bindings for several languages
  • Akka Streams,
  • Angular2, AngularFire, Firebase

In the next part of this article, I will cover prototyping a real time auction application using some of the reactive programming concepts and the toolset listed above.

Monday 4 April 2016

F#: Readability vs Conciseness

Beautiful code is elegant and simple; however, to achieve this goal programmers need to strike a fine balance between readability and conciseness. Generally, I have hated the verbosity of languages like C#, especially when compared to the simplicity of Octave or Python. I have been dabbling with the use of F# in data analysis, simple DSLs for rules engines, and data processing utilities. My experience thus far has been a very positive one, and I have been surprised by how compact and readable the code is, even when I haven't looked at it for several months. I illustrate this opinion with a simple implementation of gradient descent, used to optimise the cost function in regression learning. Let's start with a basic primer on linear regression:

Linear regression involves formulating a hypothesis about the relationship between variables of interest, e.g. years of education vs potential earnings. The multiple-variable hypothesis function can be written as:

hθ(x) = θ0 + θ1x1 + θ2x2 + .. + θnxn
where θ0 is the zero bias, θ1..θn are the weights, and x1..xn are the input variables

The weights are estimated by minimising the sum-of-squared-errors cost function given by:
C(θ0, θ1, ..., θn) = 1/(2m) · Σ_{i=1..m} (hθ(x^(i)) − y^(i))²
where m is the number of observations and y is the output vector for the training set.

The cost function can be minimised using the gradient (hill) descent method, which involves the simultaneous update θj := θj − α · ∂C/∂θj for j = 0..n, where α is the learning rate.

The partial derivatives simplify to:
∂C/∂θ0 = 1/m · Σ_{i=1..m} (hθ(x^(i)) − y^(i))
for j = 0
∂C/∂θj = 1/m · Σ_{i=1..m} (hθ(x^(i)) − y^(i)) · xj^(i)
for j = 1..n
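These update rules can be sanity-checked numerically with a small scalar-loop sketch (TypeScript here, purely illustrative; the tiny data set satisfies y = 0.5·x exactly, so gradient descent should recover θ ≈ [0, 0.5]):

```typescript
// Numeric check of the update rule theta_j := theta_j - (alpha/m) * sum_i
// (h(x_i) - y_i) * x_ij, on a tiny data set where y = 0.5 * x exactly.
// Hypothetical TypeScript sketch; the post's own implementation is in F#.
const X = [[1, 1], [1, 2], [1, 3]]; // unit column for the zero bias + one feature
const y = [0.5, 1, 1.5];
const m = y.length;
let theta = [0, 0];
const alpha = 0.1;

const h = (x: number[], t: number[]) => x[0] * t[0] + x[1] * t[1];

for (let iter = 0; iter < 1500; iter++) {
  const grad = [0, 0];
  for (let i = 0; i < m; i++) {
    const err = h(X[i], theta) - y[i];
    grad[0] += err * X[i][0]; // partial derivative w.r.t. theta_0
    grad[1] += err * X[i][1]; // partial derivative w.r.t. theta_1
  }
  // Simultaneous update of both parameters.
  theta = [theta[0] - (alpha / m) * grad[0], theta[1] - (alpha / m) * grad[1]];
}
// theta ends up close to [0, 0.5]
```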

Now let's look at the F# version of the hypothesis and gradient descent functions. We are going to use a vectorised version of the above equations to avoid having to deal with loops and subscripts. The code below uses the MathNet.Numerics package for matrix and vector operations.

let h (X:Matrix<float>) (θ: Vector<float>) = X*θ 
let gradientdescent (X: Matrix<float>) (y:Vector<float>) (θ: Vector<float>) (α: float)  =
    θ - (α / float y.Count)*(((h X θ) - y)*X )

Let's test it out with an example:

//Slice the training data set A into X input features and y output features and append a unit vector to account for the zero bias
let A = matrix [[ 1.; 0.5 ]; [ 2.; 1. ]; [ 3.; 1.5 ]]
let X = DenseMatrix.Create(A.RowCount, 1, 1.0).Append(A.[0..,0..0])
let y = A.[0..,1]
let mutable θ = vector [0.; 0.]
let α = 0.1

// descend for 1500 iterations
for i in 1..1500 do
   θ <- gradientdescent X y θ α

Also, the gradient descent function works for logistic regression by simply using a sigmoid hypothesis function.

let sigmoid (X:Matrix<float>) (θ: Vector<float>) = 1.0/(1.0 + (X*θ).Negate().PointwiseExp())
let h (X:Matrix<float>) (θ: Vector<float>) = sigmoid X θ

Except for some type pollution, and ignoring the hardcoded test code, the code is pretty self-explanatory and demonstrates a good balance between readability and conciseness. The above code is a rather naïve implementation of linear regression. You can extend it by introducing functions to load data sets, slice them into input and output features, scale features, cool alpha, and end the descent on convergence to some epsilon.