This is a late post, and it is part 1 of 2, as there is quite a bit of content to digest.
Background Story: One of my goals last year was to get certified in Heroku. I studied my butt off for 3-4 weeks to prepare for this exam. I have web development experience but hadn't deployed applications to Heroku until just recently. With Salesforce offering this certification in mid-2019, I decided to take it before the end of the year (December 2019).
Here are my key takeaways from the exam and materials to focus on. (Please do not ask for exam dumps! If you wanna be really good, put in the effort and study please!)
- First, get familiar with the Heroku Architecture Designer Exam Guide – https://trailhead.salesforce.com/help?article=Salesforce-Certified-Heroku-Architecture-Designer-Exam-Guide
- The Salesforce Trailmix mostly links to the Heroku Dev Center. I didn't refer to those articles that much, but I did as many Heroku-related Trailhead modules as I could and got those badges. https://trailhead.salesforce.com/users/strailhead/trailmixes/prepare-for-your-heroku-architecture-credential
Where does Heroku fall in the cloud computing stack? It is a Platform as a Service (PaaS).
If you didn't have Heroku, imagine what you would traditionally go through between coming up with an idea and getting the app built and running.
Heroku removes most of that decision making and abstracts it away, so you can focus on just developing the app instead of worrying about the infrastructure that supports it.
- Try the different Getting Started tutorials on Heroku in different languages (I tried PHP, Node.js, Java, and Python)
- Check the common pattern defined for each language and how apps get deployed to Heroku (the dependency mechanism, like requirements.txt for Python, pom.xml for Java, package.json for Node.js, etc.)
- For the framework you are using, know the commands that run the application and define them in the Procfile (a minimal example follows below).
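As an illustration, here is a minimal sketch for a hypothetical Node.js app (the file names are placeholders): a Procfile line is just a process type, a colon, and the command that starts that process.

```bash
# Create a minimal Procfile for a hypothetical Node.js app:
#   the web process type receives HTTP traffic,
#   the worker process type handles background jobs
cat > Procfile <<'EOF'
web: node index.js
worker: node worker.js
EOF
```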
- Get in-depth with the Heroku Architecture
- git push heroku master – what does this command actually do on the Heroku side? (a CLI sketch follows this list)
- what are buildpacks? – open-source sets of instructions/commands that take your application source code, dependencies, and runtime and produce a slug
- what is a slug? – produced by the buildpack, this contains all your compiled application code, dependencies and runtime ready to run on a dyno with the Procfile.
- what are config vars? – environment-specific configuration values, exposed to your app as environment variables
- what are dynos? – a dyno is a lightweight Linux container that executes your slug. Dynos can be scaled horizontally by adding more dynos or vertically by using bigger dynos.
- web – a process type that receives HTTP traffic
- worker – a process type typically used for background jobs, queueing systems, and timed (cron) jobs
- one-off – a temporary dyno that runs attached to your local terminal (or detached). Used for admin tasks, db migrations, and console sessions.
- different dyno types
- free, hobby, standard, and performance for the Common Runtime
- limitations of each dyno type
- private dynos for Private Spaces
- get familiar with the dyno manager, redundancy, and security
- what are stacks? – an operating system image (based on Ubuntu) that is curated and maintained by Heroku
- how to add custom domains and subdomains
- limitations of A records with your DNS provider
- use of CNAME records
- HTTP routing – routes incoming requests for your app to its running web dynos
- HTTP Request ID headers
- Session Affinity – routes requests from the same client to the same dyno, so an app that keeps session state in memory still works as expected
- Logging and Monitoring – logs are treated as time-ordered streams of events collated from all of your app's processes
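To make some of the items above concrete, here is a rough sketch of the CLI commands involved (the app and domain names are placeholders; this assumes the Heroku CLI is installed and you are logged in):

```bash
git push heroku master              # push source; the buildpack compiles it into a slug and a new release runs on your dynos
heroku ps                           # list the dynos currently running the release
heroku ps:scale web=2 worker=1      # scale horizontally by process type
heroku run bash                     # start a one-off dyno for admin tasks or migrations
heroku config:set LOG_LEVEL=info    # config vars hold environment-specific settings
heroku domains:add www.example.com  # add a custom domain, then point a CNAME at the DNS target it returns
heroku features:enable http-session-affinity   # opt this app in to session affinity
heroku logs --tail                  # stream the app's aggregated log output
```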
Heroku Add-ons
- how to provision add-ons
- from the dashboard or the CLI (see the sketch after this list)
- share add-ons between apps
- what you can see in the Elements Marketplace
- add-ons, buttons, and buildpacks
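A minimal sketch of provisioning and sharing an add-on from the CLI (the app names, plan, and add-on instance name are placeholders):

```bash
heroku addons:create heroku-redis:hobby-dev -a my-api    # provision an add-on for one app
heroku addons:attach redis-cubed-12345 -a my-worker      # share the provisioned add-on with a second app
heroku addons -a my-api                                  # list the add-ons attached to an app
```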
Heroku Managed Add-ons (Data Management)
- Heroku Postgres – I focused too much on this because I was scared by how much material there is about it, but during the exam I realized that just covering the basics would have gone a long way by itself.
- get the basics covered (provisioning, the different plans, primary and follower databases, forking, sharing add-ons between apps) – a CLI sketch follows this list
- follower – a read-only replica that stays up to date with the primary; database replication serves many purposes:
- read throughput with leader-follower configuration
- hot standby
- reporting database
- seamless migration and upgrade
- forking – creates a snapshot of your current database (it does not stay up to date the way a follower does)
- a risk-free way to experiment with production data, whether for testing, development, or migration
- dataclips – SQL queries whose results you can share; they can be accessed through a browser and downloaded as CSV or JSON (limited to 30 requests per minute per IP)
- can be public/draft, or shared with individuals or with teams you are a member of
- can be revoked
- can be integrated with Google Sheets (=IMPORTDATA(…))
- limited to 100k rows returned
- how to troubleshoot performance issues
- use CLI commands like pg:diagnose or the Diagnose tab (not available on hobby plans)
- expensive queries – queries that run slowly or take up a significant share of total execution time
- Run EXPLAIN ANALYZE (via pg:psql)
- identify used/unused indexes
- upgrade to the latest database version
- how to do rollbacks from backups – you can roll the database back to a certain point in time, similar in spirit to rolling back a release deployment with heroku releases:rollback
- follows the same pattern as follower and fork and does not affect the primary database
- how does it relate to the Heroku Connect add-on? – more on this under the Heroku Enterprise topic in part 2, where it is used for syncing Salesforce records
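A rough CLI sketch of the Heroku Postgres features above (the app name, plan, and backup ID are placeholders; the flags follow the Heroku Postgres docs, so verify them against the current CLI):

```bash
heroku addons:create heroku-postgresql:standard-0 -a my-app                        # provision a primary database
heroku addons:create heroku-postgresql:standard-0 --follow DATABASE_URL -a my-app  # add a follower (read replica)
heroku addons:create heroku-postgresql:standard-0 --fork DATABASE_URL -a my-app    # fork a snapshot of the primary
heroku pg:info -a my-app                                 # plan, Postgres version, follower/fork status
heroku pg:diagnose -a my-app                             # automated checks for common performance problems
heroku pg:psql -a my-app                                 # open psql, e.g. to run EXPLAIN ANALYZE on an expensive query
heroku pg:backups -a my-app                              # list backups
heroku pg:backups:restore b101 DATABASE_URL -a my-app    # restore from a backup (b101 is a hypothetical backup ID)
```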
- Heroku Redis
- Heroku’s managed key-value store as a service
- create a Redis instance and attach it to an app
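A minimal sketch of that, assuming a hypothetical app called my-app:

```bash
heroku addons:create heroku-redis:hobby-dev -a my-app   # provision a Redis instance; it is attached via the REDIS_URL config var
heroku redis:info -a my-app                             # inspect the instance (plan, version, connection details)
heroku config:get REDIS_URL -a my-app                   # the URL your app uses to connect
```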
- Heroku Kafka
- I was not able to play with this add-on as it is paid, but an in-depth understanding of how the architecture and its concepts work is a must (a CLI sketch follows at the end of this section)
- Kafka is a distributed commit log that provides fault-tolerant communication between producers and consumers using message-based topics.
- Some use cases
- elastic queueing – Kafka can accept large volumes of events, and consumers/downstream services can pull those events when they are ready; this allows scaling and improves stability when volumes fluctuate
- data pipelines and analytics – with Kafka's immutable data streams, developers can build highly parallel data pipelines for ETL and data aggregation
- microservice coordination – Kafka can act as the messaging backbone between services, decoupling producers from consumers
- Kafka concepts to master
- Kafka is made up of a cluster of brokers (instances running Kafka); the number of brokers in a cluster can be scaled to increase capacity, resilience, and parallelism
- brokers manage the stream of messages sent to Kafka
- producers are clients that write to brokers
- consumers are clients that read from brokers
- topics are made up of a number of partitions
- allocate more partitions if your consumers are slow compared to your producers
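I could not try this myself since the add-on is paid, but here is a rough sketch of the CLI flow based on the Heroku Kafka docs (the plan, topic, and consumer group names are placeholders, so double-check the commands against the current CLI):

```bash
heroku addons:create heroku-kafka:basic-0 -a my-app              # provision a Kafka cluster
heroku plugins:install heroku-kafka                              # CLI plugin that provides the kafka:* commands
heroku kafka:topics:create orders --partitions 8 -a my-app       # more partitions allow more consumer parallelism
heroku kafka:topics:info orders -a my-app                        # inspect the topic's partitions and retention
heroku kafka:consumer-groups:create order-processors -a my-app   # register a consumer group
```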
For the second part, I'll be sharing tips for the following:
- Heroku Enterprise
- User Management
- Heroku Runtime
- Common Runtime
- Private Spaces
- Shield Private Spaces
- Dynos and Dyno Manager
- Deployment
- Etc.