CLOUD COMPUTING
 

THE CLOUD

By Colleen

For most organizations, including enterprises, it is not a matter of whether but when they will move to the cloud.

 

Dealing with large-scale failures takes a qualitatively different approach and set of design principles.



AWS
Amazon Simple Storage Service (S3)
Scoping the failure

core tenets
no long term contracts
available on demand
elastic

Intro to AWS
EC2
EBS
VPC
S3
SQS
SimpleDB
CDN (CloudFront)
EMR
RDS


CAP
consistency, availability, and partition tolerance, but not all three

Capacity
82 million objects
100,000 requests per second


Failures will occur in the data center

  • expect drives to fail
  • expect network connections to fail
  • expect a single machine to go out
failure scenarios
must think through all failures and decide which ones are important
manifestations take on different forms
  • corruption of stored and transmitted data
  • losing one machine in the fleet
  • losing an entire data center
  • losing an entire data center plus one machine in another data center
causes of failure
  • human error
    - network configuration mistakes
    - pulled cords
    - forgetting to expose the LB (load balancer) to external traffic
    - DNS black holes
  • sw bugs
  • acts of nature
    - flooding
    - heat waves
    - lightning (has happened 5 times to Amazon; caused a partial outage once)

  • entropy
    - drive failures
    - a rack switch failure makes half the hosts in a rack inaccessible

  • beyond scale
some dimensions of scale are easy to manage
- amount of free space in the system
- precise measurement of when you could run out
- no ambiguity
- acquisition of components from multiple suppliers

some dimensions of scale are more difficult
- request rate
- ultimate manifestation: a DDoS attack

Timely failure detection
failure detection and propagation must handle or avoid:
- scaling bottlenecks of their own
- centralized failure of failure-detection units
- asymmetric routes

S3's gossip approach to failure detection
gossip, or epidemic, protocols are useful tools when probabilistic consistency is acceptable

basic idea
- applications and components heartbeat their existence

not easy: data changes at different rates, and the network overlay matters
can't exchange all gossip state at once
the network overlay must be taken into consideration
doesn't handle the bootstrap case
doesn't address the issue of application lifecycle
not all state transitions in the lifecycle should be performed automatically; for some, human intervention may be required
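A minimal sketch of the heartbeat-and-merge idea (names, timeouts, and data layout are illustrative, not S3's actual implementation): every node bumps its own counter, periodically gossips its table to a random peer, merges incoming tables by taking the higher counter, and suspects any peer whose counter stops advancing.

    import java.util.*;

    // Toy gossip-style failure detector (hypothetical sketch).
    class GossipNode {
        static final long FAIL_AFTER_MS = 10_000;           // no progress for this long => suspect
        final String self;
        final Map<String, long[]> table = new HashMap<>();  // peer -> {heartbeat counter, local time of last bump}

        GossipNode(String self) {
            this.self = self;
            table.put(self, new long[]{0, System.currentTimeMillis()});
        }

        // Called on a local timer: advertise our own liveness.
        void beat() {
            long[] e = table.get(self);
            e[0]++;
            e[1] = System.currentTimeMillis();
        }

        // Merge a gossiped table: keep whichever entry has the higher heartbeat counter.
        void merge(Map<String, Long> remote) {
            final long now = System.currentTimeMillis();
            remote.forEach((peer, beats) -> {
                long[] e = table.computeIfAbsent(peer, k -> new long[]{-1, now});
                if (beats > e[0]) { e[0] = beats; e[1] = now; }
            });
        }

        // Snapshot sent to one randomly chosen peer (full state; real systems gossip deltas).
        Map<String, Long> snapshot() {
            Map<String, Long> out = new HashMap<>();
            table.forEach((peer, e) -> out.put(peer, e[0]));
            return out;
        }

        // Peers whose counters have not advanced recently are suspected dead.
        Set<String> suspects() {
            long now = System.currentTimeMillis();
            Set<String> dead = new TreeSet<>();
            table.forEach((peer, e) -> {
                if (!peer.equals(self) && now - e[1] > FAIL_AFTER_MS) dead.add(peer);
            });
            return dead;
        }
    }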

DESIGN PRINCIPLES (to help the system be resilient)
- service relationships should be tolerant
- decoupling functionality into multiple services has a standard set of advantages
need to protect yourself from upstream service dependencies when they haze you
(e.g., callers lease permission to make a certain number of calls)
protect yourself from downstream service dependencies when they fail
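Both protections can be sketched together: a permit pool caps what callers can drive through you, and a crude breaker stops you from hammering a sick downstream. The names and thresholds below are hypothetical, not any specific library's API.

    import java.util.concurrent.Semaphore;
    import java.util.concurrent.TimeUnit;
    import java.util.function.Supplier;

    // Hypothetical sketch of dependency protection in both directions.
    class GuardedService {
        final Semaphore permits = new Semaphore(100); // lease: at most 100 in-flight calls
        volatile long breakerOpenUntil = 0;           // crude circuit breaker for a downstream dep

        <T> T call(Supplier<T> downstream, T fallback) throws InterruptedException {
            if (System.currentTimeMillis() < breakerOpenUntil) return fallback;  // breaker open: fail fast
            if (!permits.tryAcquire(50, TimeUnit.MILLISECONDS)) return fallback; // upstream hazing us: shed load
            try {
                return downstream.get();
            } catch (RuntimeException e) {
                breakerOpenUntil = System.currentTimeMillis() + 5_000;           // back off for 5s
                return fallback;
            } finally {
                permits.release();
            }
        }
    }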

- code for large failures
- in some systems you suppress failures entirely
  example: replication of entities (data)
- some systems must choose different behaviors based on the unit of failure
- anticipate data corruption (the end-to-end check includes the customer)
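A toy illustration of the end-to-end check, using CRC32 purely for brevity (a real system might use a stronger digest): the checksum is computed where the data originates and travels with it, so the final reader, ultimately the customer, can re-verify.

    import java.nio.charset.StandardCharsets;
    import java.util.zip.CRC32;

    // Sketch: a checksum that travels with the payload end to end.
    class Checksummed {
        final byte[] payload;
        final long crc;

        Checksummed(byte[] payload) {
            this.payload = payload;
            this.crc = crc(payload);
        }

        static long crc(byte[] data) {
            CRC32 c = new CRC32();
            c.update(data);
            return c.getValue();
        }

        // The receiver re-verifies before trusting the bytes.
        boolean verify() { return crc == crc(payload); }

        public static void main(String[] args) {
            Checksummed msg = new Checksummed("object bytes".getBytes(StandardCharsets.UTF_8));
            System.out.println(msg.verify());  // true
            msg.payload[0] ^= 0x01;            // a flipped bit that slipped past lower layers
            System.out.println(msg.verify());  // false: the end-to-end check catches it
        }
    }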

- code for elasticity
the dimensions of elasticity:
- need infinite elasticity for cloud storage
- quick elasticity for recovery from large-scale failures
introducing new capacity to a fleet
- ideally you can introduce more resources into the system and its capability increases
- all load-balancing systems (hw and sw)

- monitor, extrapolate, and react
- modeling (to determine choke points that need to be monitored)
- alarming
- reacting
- feedback loops (feed what you observe back into the model to decide where to spend time on durability, in real time)
- keeping ahead of failures

Code for frequent single-machine failures
- the most common failure manifestation: a single box
for persistent state, use quorum
- advantages:
  does not require all ops to succeed
  hides underlying failures
  hides poor latency
- disadvantages:
  increases aggregate load on the system for some ops
  more complex
  difficult to scale

- all ops have a "set size"
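A toy quorum sketch with N = 3 replicas, W = 2 write acks, and R = 2 read answers (illustrative "set size" parameters, not S3's actual values). Because R + W > N, any read quorum overlaps any write quorum, which is what hides individual replica failures and slow nodes.

    import java.util.*;

    // Toy quorum read/write over in-memory "replicas" (hypothetical sketch).
    class Quorum {
        static final int N = 3, W = 2, R = 2;   // the op's "set size"
        static class Versioned { long version; String value; }
        final List<Map<String, Versioned>> replicas = new ArrayList<>();

        Quorum() { for (int i = 0; i < N; i++) replicas.add(new HashMap<>()); }

        // Write to all replicas, but only require W acks (tolerates N - W failures).
        boolean put(String key, String value, long version) {
            int acks = 0;
            for (Map<String, Versioned> r : replicas) {
                try {
                    Versioned v = r.computeIfAbsent(key, k -> new Versioned());
                    if (version > v.version) { v.version = version; v.value = value; }
                    acks++;
                } catch (RuntimeException e) { /* a down replica simply doesn't ack */ }
            }
            return acks >= W;
        }

        // Read R replicas and keep the highest version seen.
        String get(String key) {
            Versioned best = null;
            int answers = 0;
            for (Map<String, Versioned> r : replicas) {
                if (answers == R) break;
                Versioned v = r.get(key);
                answers++;
                if (v != null && (best == null || v.version > best.version)) best = v;
            }
            return best == null ? null : best.value;
        }
    }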

Game Days
network engineers and DC technicians turn off a data center
- don't tell service owners
- accept the risk; it is going to happen anyway
- build up to it to start


real failure experiences
  • large outage last year
  • traced down to a single network card
  • once found, the problem was easy to reproduce
  • corruption leaked past the TCP checksum

 

By Colleen

open source is always the first option, to avoid vendor lock-in

Betting Engine
RabbitMQ
message broker
- free, open source
- highly available, highly scalable (written in Erlang)
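For flavor, a minimal publish to a fanout exchange with the standard RabbitMQ Java client; the host name, exchange name, and payload are made up.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import java.nio.charset.StandardCharsets;

    // Illustrative publisher; assumes a broker on localhost.
    public class OddsPublisher {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            try (Connection conn = factory.newConnection();
                 Channel ch = conn.createChannel()) {
                ch.exchangeDeclare("live-odds", "fanout", true);  // durable fanout exchange
                byte[] body = "match=42 odds=2.15".getBytes(StandardCharsets.UTF_8);
                ch.basicPublish("live-odds", "", null, body);     // fanout ignores the routing key
            }
        }
    }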

Kaazing
open-source and open-standards protocol (HTML5 WebSockets)
used for connection offloading


Protocol Buffers - Google
flexible, efficient, automated mechanism for serializing structured data (binary)
schema support (.proto files)
language neutral
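What a .proto schema looks like; this message is illustrative, not their actual wire format. The numbered field tags are what make the binary encoding compact and forward/backward compatible.

    // Illustrative schema (proto2 era), not the real betting-engine messages.
    syntax = "proto2";

    message BetUpdate {
      required int64 event_id = 1;  // tag numbers, not names, go on the wire
      required double odds = 2;
      optional string market = 3;   // optional fields can be added without breaking old readers
    }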



betting engine (java/spring) --- RabbitMQ --- fan-out node --- Kaazing --- live betting client (AS3)

 

The Stack
Environment
java, groovy, scala (newest 2%), ruby, C++


Containers
tomcat, jetty

Data Layer
oracle, MySQL, Voldemort, lucene, memcache

Offline Processing
Hadoop and Splunk


DATA COLLECTION
the bulk of the challenge
the majority of collected data comes from the data store, with extensive memcache use
the requirement is speed

option 1: push architecture (inbox; still used, but old)
each member has an inbox of notifications received from their connections/followees
N writes per update (where N may be very large)
very fast to read
difficult to scale, but used for private or targeted systems

option 2: pull architecture (new arch)
each member has an "activity space" that contains their actions on LinkedIn
1 write per update (no broadcast)
requires up to N reads to collect N streams
can we optimize to minimize the number of reads?
not all N members have updates that satisfy the query
not all updates can/need to be displayed on the screen
some updates/members are more important than others
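A sketch of that pull-model read path; all names are hypothetical, and the mightHaveContent check is the member filter described below, which is what keeps the read fan-out well under N.

    import java.util.*;
    import java.util.function.Predicate;

    // Pull model: one write per update; reads fan out across connections at query time.
    class FeedReader {
        final Map<Long, List<String>> activitySpace;  // memberId -> that member's own updates
        final Predicate<Long> mightHaveContent;       // cheap pre-check to skip pointless reads

        FeedReader(Map<Long, List<String>> spaces, Predicate<Long> filter) {
            this.activitySpace = spaces;
            this.mightHaveContent = filter;
        }

        // Collect a viewer's feed from up to N connections, stopping once the screen is full.
        List<String> feedFor(List<Long> connections, int screenSize) {
            List<String> feed = new ArrayList<>();
            for (long member : connections) {
                if (feed.size() >= screenSize) break;          // not everything fits on screen anyway
                if (!mightHaveContent.test(member)) continue;  // most members have nothing relevant
                feed.addAll(activitySpace.getOrDefault(member, List.of()));
            }
            return feed.subList(0, Math.min(screenSize, feed.size()));
        }
    }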

Queuing
activeMQ

frameworks
spring

Capacity
35M/week updates
14M/week emails


Storage Model
L1: temporal
oracle
combined CLOB/VARCHAR storage
optimistic locking
1 read + 1 write (merge) per update
size bounded by the number of updates and the retention policy
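The optimistic-locking merge can be sketched as a versioned UPDATE over JDBC; the table and column names here are hypothetical. A write only lands if the version it read is still current; otherwise the caller re-reads and retries.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Generic optimistic-locking merge (hypothetical schema: updates(member_id, body, version)).
    class TemporalStore {
        boolean merge(Connection db, long memberId, String newBody, long expectedVersion) throws SQLException {
            String sql = "UPDATE updates SET body = ?, version = version + 1 " +
                         "WHERE member_id = ? AND version = ?";
            try (PreparedStatement ps = db.prepareStatement(sql)) {
                ps.setString(1, newBody);
                ps.setLong(2, memberId);
                ps.setLong(3, expectedVersion);
                return ps.executeUpdate() == 1;  // 0 rows => concurrent update; re-read and retry
            }
        }
    }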

L2: Tenured
accessed less frequently
simple key-value storage is sufficient (each update has a unique id)
oracle today, transitioning to Voldemort



Member filter
need to avoid fetching N feeds (too expensive)
the filter contains an in-memory summary of user activity
the filter only returns false positives, never false negatives
easy-to-measure heuristic: for N members, how many had good content
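That false-positive-only behavior is exactly what a Bloom filter gives; a sketch follows (bit-array size and hash mixing are illustrative): a clear bit proves a member has no activity, so the expensive read can be skipped, while a set bit only means "maybe, go fetch".

    import java.util.BitSet;

    // Bloom-filter-style member activity summary (illustrative parameters).
    class MemberFilter {
        static final int BITS = 1 << 20;
        final BitSet bits = new BitSet(BITS);

        void recordActivity(long memberId) {
            for (int seed = 0; seed < 3; seed++) bits.set(hash(memberId, seed));
        }

        // false => definitely nothing, skip the read; true => fetch (may be a false positive).
        boolean mightHaveActivity(long memberId) {
            for (int seed = 0; seed < 3; seed++) {
                if (!bits.get(hash(memberId, seed))) return false;
            }
            return true;
        }

        static int hash(long id, int seed) {
            long h = id * 0x9E3779B97F4A7C15L + seed * 0xC2B2AE3D27D4EB4FL;
            h ^= h >>> 32;
            return (int) (h & (BITS - 1));
        }
    }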


Commenting
users can create discussions around updates
leverage the existing forum service
denormalize a discussion summary onto the tenured update; resolve first/last comments on retrieval
the full discussion can be retrieved dynamically

Twitter Sync
partnership with Twitter
bi-directional flow of status updates
export status updates, import tweets
users register their twitter account
authorize via OAuth

email delivery
multiple concurrent mail-generating tasks
each task has non-overlapping id-range generators to avoid overlap and allow parallelization
controlled by a task scheduler
- sets delivery time
- controls task execution status (suspend/resume, etc.)
common content is cached in the Notifier, which packages the email
user-priority JSP framework
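A sketch of the non-overlapping id-range idea: split the member-id space into disjoint ranges so each mail task can run in parallel without generating the same email twice. The partitioning below is an assumption, not LinkedIn's actual scheduler.

    import java.util.ArrayList;
    import java.util.List;

    // Disjoint id ranges for parallel mail-generation tasks (hypothetical sketch).
    class MailSharder {
        static final class Range {
            final long fromInclusive, toExclusive;
            Range(long from, long to) { fromInclusive = from; toExclusive = to; }
        }

        static List<Range> shard(long maxMemberId, int tasks) {
            List<Range> ranges = new ArrayList<>();
            long step = (maxMemberId + tasks - 1) / tasks;  // ceiling division so no id is dropped
            for (long lo = 0; lo < maxMemberId; lo += step) {
                ranges.add(new Range(lo, Math.min(lo + step, maxMemberId)));
            }
            return ranges;  // each task owns exactly one range: no overlap, full coverage
        }
    }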

David Henke, now head of engineering/operations at LinkedIn, came from Yahoo

 

How do we get this to model Collaboration & Change Management?

bottom up

  • scripts & recipes (home-grown automations)
  • runbooks (workflow), policy
  • frameworks (chef or puppet, cfengine)
  • build dependency (maven)

top down
  • modeled viewpoints (e.g. MS Oslo, UML, enterprise arch)
  • modular containers (e.g. OSGi, Spring, Azure roles)
  • configuration models (SML, CIM, ECML, EDML)

problem "all modeling is pogramming and all programming is debugging" - Neil Gunther

Need visibility into what the model implies
current solutions don't seem satisfactory
  • code generation?
  • plan generation?
  • runtime adjustments?

Accounting barriers to agile/lean operations
- cost attribution
  capex vs. opex
  (it is configuration/debugging)
- should look at the entire value stream from dev --> prod

trend --> costing based on time calculations for repeatable activities: time-driven activity-based costing


characterize an integrated approach to cloud app design and ops:
model-driven
- make docs conform to a logical framework

goal and policy driven

collaborative

governable


elastic modeling languages

 

erlang
scala

memcache
couchdb
redis
hadoop

varnish (http cache)
squid (old http cache)
rabbitmq (message bus... for broadcast and pub/sub)
cassandra