This is called "rules of the road", but they aren't really rules; they're more like guidelines: rules that apply until there is a good reason to ignore them.
Do the work on write/update wherever you can.
Flat file sites are FAST.
Caches are bad. If you must cache, then let HTTP do it for you. That way the data source retains control of its data and can make the load/consistency trade-off itself.
If you must use JS, it should only be for personalisation and client-side composition.
Microservices move the complexity from compile time to deployment time. Are you smarter than a compiler? There are probably ways around this, but they involve building a compiler for hardware.
Where is the state? If you’re doing anything of use there must be persistence somewhere. Understand the scope of the persistence. It’s actually what defines the boundaries of the service.
Queues count as state. (In a way persistence is queues, or at least ordering.)
Make communication explicit. If two systems communicate make sure it is understood and not implicit through a database or some other storage medium.
Communication should be decoupled wherever possible: contracts.
If things are very closely coupled or even really chatty then they should probably be in the same process and any complexities controlled by a compiler.
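One way to make the communication explicit, as a rough sketch: a versioned message contract between the two systems instead of an implicit shared database table. The event name and fields here are hypothetical.

```python
# Sketch: an explicit, versioned message contract between services,
# rather than implicit coupling through shared storage.
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class OrderPlaced:
    version: int       # bump on breaking change so consumers can cope
    order_id: str
    amount_pence: int  # integers for money; the unit is part of the contract

def encode(msg: OrderPlaced) -> bytes:
    return json.dumps(asdict(msg)).encode()

def decode(raw: bytes) -> OrderPlaced:
    return OrderPlaced(**json.loads(raw))

msg = OrderPlaced(version=1, order_id="o-123", amount_pence=499)
assert decode(encode(msg)) == msg  # the contract round-trips
```

Both sides now depend on the contract, not on each other's internals.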
Bandwidth-Delay Product: you can have all the bandwidth in the world, but latency will kill you most of the time. See notes.
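A back-of-envelope illustration, with made-up but plausible figures (a 1 Gbit/s link with a 100 ms round trip): the bandwidth-delay product says how much data must be in flight to fill the pipe, and a chatty sequential workload spends nearly all its time waiting on round trips, not transferring.

```python
# Bandwidth-delay product: bytes that must be "in flight" to keep a link busy.
bandwidth_bits_per_s = 1_000_000_000  # 1 Gbit/s (illustrative)
rtt_ms = 100                          # 100 ms round trip (illustrative)

bdp_bytes = bandwidth_bits_per_s // 8 * rtt_ms // 1000
print(bdp_bytes)  # 12500000: ~12.5 MB in flight just to fill the pipe

# Latency dominates chatty workloads: 100 sequential 1 KB request/responses
# spend almost all their time waiting, not transferring.
requests = 100
transfer_s = requests * 1024 * 8 / bandwidth_bits_per_s  # well under 1 ms total
waiting_s = requests * rtt_ms / 1000                     # 10.0 s of round trips
```

More bandwidth changes transfer_s, which was already negligible; only fewer round trips fixes waiting_s.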
Consistency, availability, partition tolerance: pick any two. It's more complicated than that, though. (CAP is uncool now.)
Don’t do your own cryptography, ever.
Careful with the keys.
Security is very very hard don’t reinvent the wheel.
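In the spirit of not reinventing the wheel, a rough sketch of password storage using only vetted stdlib primitives: a random salt, a memory-hard KDF, and a constant-time comparison. The scrypt parameters here are illustrative, not a recommendation.

```python
# Sketch: lean on vetted primitives rather than writing any crypto yourself.
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)                    # random per-password salt
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)        # memory-hard KDF
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)     # constant-time compare

salt, digest = hash_password("correct horse")
assert verify("correct horse", salt, digest)
assert not verify("Tr0ub4dor&3", salt, digest)
```

Every line that matters here is someone else's well-reviewed wheel.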
Consider the difference between Authentication and Authorisation.
If you need an audit trail that isn't trivial, consider an event store. If that's too complex, consider a command model: every request is a command that gets logged as run or failed. If you've built a good API, you might get this for free in the logs. It depends on the purpose of the audit trail.
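A rough sketch of that command model: every request is wrapped as a command record and appended to the log whether it runs or fails. The command name, payload, and handler are hypothetical, and the list stands in for an append-only log file.

```python
# Sketch: every request is a command, logged as run or failed.
import json
import time

AUDIT_LOG: list[str] = []  # stands in for an append-only log file

def run_command(name: str, payload: dict, handler) -> bool:
    record = {"ts": time.time(), "command": name, "payload": payload}
    try:
        handler(payload)
        record["status"] = "run"
        return True
    except Exception as exc:
        record["status"] = "failed"
        record["error"] = str(exc)
        return False
    finally:
        AUDIT_LOG.append(json.dumps(record))  # both outcomes are logged

def set_price(payload: dict) -> None:
    if payload["pence"] < 0:
        raise ValueError("negative price")

run_command("set_price", {"sku": "A1", "pence": 499}, set_price)
run_command("set_price", {"sku": "A1", "pence": -1}, set_price)
```

The audit trail is then just the log, replayable and greppable.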
Do you have non-functional requirements? Trick question: you always do. Can you test that they are being met?
What are the dimensions: requests, storage, response time, transactions, etc.? "Lots of users" isn't a dimension.
If they doubled, what would happen? What about an order of magnitude? You don't have to build for this, just be able to answer the question.
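Answering that question can be a few lines of arithmetic. All figures below are illustrative assumptions, but the shape of the calculation is the point: pick the dimension, scale it, and see where it lands.

```python
# Back-of-envelope answer to "what if it doubled / went up 10x?".
requests_per_s = 200        # illustrative current load
bytes_per_record = 2_000    # illustrative record size written per request
seconds_per_year = 60 * 60 * 24 * 365

storage_per_year_gb = (requests_per_s * bytes_per_record *
                       seconds_per_year) / 1e9

for factor in (1, 2, 10):
    print(f"{factor}x -> {requests_per_s * factor} req/s, "
          f"~{round(storage_per_year_gb * factor)} GB/year")
```

You don't build for 10x; you just know whether 10x means a bigger disk or a different architecture.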
Always prototype with real infrastructure.
Start with errors and work up from there.
With something that looks like REST you can get a very useful log just from the HTTP log. And if it's not useful, you might not be as RESTful as you think.
Aggregate information somewhere and start graphing and alerting.
Extra points for doing clever correlation stuff with the messages.
Automating the response to the above is talked about a great deal. If someone at some point does something other than send a text message it will make me very happy.
Management of complexity is everything. Really it is.
Consider 6 dice as a system: 6^6 combinations, effectively untestable. One die has only six states, much easier to test. This rule applies at every level of abstraction.
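The dice arithmetic from the rule above, spelled out: testing the lump means a multiplicative state space, while testing each die in isolation is merely additive.

```python
# Combined system vs isolated components: multiplicative vs additive states.
dice, faces = 6, 6

combined_states = faces ** dice  # test the six dice as one system
isolated_states = dice * faces   # test each die on its own

print(combined_states, isolated_states)  # 46656 vs 36
```

Drawing the component boundaries is what buys you the 46656-to-36 reduction.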
Every problem can be solved with more layers of abstraction, except too many layers.
Side effects will kill you. Every. Damn. Time.
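A tiny sketch of why: the impure version below mutates shared state, so it cannot be safely retried or tested in isolation, while the pure version always gives the same output for the same input. The discount function is hypothetical.

```python
# Side-effecting vs pure: same business rule, very different failure modes.
def apply_discount_impure(basket: list[int]) -> None:
    for i, price in enumerate(basket):
        basket[i] = price * 90 // 100  # mutates the caller's list in place

def apply_discount_pure(basket: list[int]) -> list[int]:
    return [price * 90 // 100 for price in basket]  # returns a new list

prices = [100, 250]
discounted = apply_discount_pure(prices)
assert prices == [100, 250]                        # input untouched; retry is harmless
assert discounted == [90, 225]
assert apply_discount_pure(prices) == discounted   # same input, same output, always
```

Run the impure version twice by accident and the customer gets two discounts; the pure version doesn't care.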
Some of these numbers will change over time because of performance improvements; some of them won't, because of the speed of light. Knowing the difference is fairly important.
Credit: Jeff Dean: http://research.google.com/people/jeff/
Originally by Peter Norvig: http://norvig.com/21-days.html#answers
https://gist.github.com/hellerbarde/2843375