Fundamentals of Software Optimization Part III — Optimizing Memory Allocation Performance

This is the third post in a three-post series covering the fundamentals of software optimization. You can find the introduction here. You can find Part II here. You can find the companion GitHub repository here.

Part II covered the process of using a profiler (VisualVM) to identify “hot spots” where a program spends its time and then optimizing that code to improve the program’s wall clock performance.

This post will cover the process of using a profiler to identify memory allocation hot spots and then optimizing program code to improve memory allocation performance. It might be useful to refer back to Part II if you need a refresher on how to use VisualVM or the optimization workflow.
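To give a flavor of the kind of change that improves allocation performance, here is a small, hypothetical before-and-after sketch in Java (the class and method names are invented and are not taken from the companion repository). It shows accidental autoboxing creating one short-lived object per loop iteration, and the primitive-typed fix that allocates nothing:

```java
// Hypothetical illustration of an allocation fix; not code from the series.
public class SumExample {

    // Before: the Long accumulator forces unbox-add-box on each iteration,
    // allocating a new Long object for most values (small values may come
    // from the Long cache, but large sums will not).
    static Long sumBoxed(long[] values) {
        Long sum = 0L;
        for (long value : values) {
            sum += value;
        }
        return sum;
    }

    // After: a primitive accumulator performs the same work with zero
    // per-iteration allocations, so the method drops out of the
    // allocation hot spot list.
    static long sumPrimitive(long[] values) {
        long sum = 0;
        for (long value : values) {
            sum += value;
        }
        return sum;
    }
}
```

A memory profiler such as VisualVM would surface the first version as a large number of `java.lang.Long` instances, which is the kind of signal the post uses to decide what to optimize.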

Continue reading “Fundamentals of Software Optimization Part III — Optimizing Memory Allocation Performance”

Fundamentals of Software Optimization Part II — Optimizing Wall Clock Performance

This is the second post in a three-post series covering the fundamentals of software optimization. You can find the introduction here. You can find Part I here. You can find Part III here. You can find the companion GitHub repository here.

Part I covered the development of the benchmark, which is the “meter stick” for measuring performance, and established baseline performance using the benchmark.

This post will cover the high-level optimization process, including how to profile software performance with VisualVM to identify “hot spots” in the code, make code changes to improve hot spot performance, and evaluate those changes using the benchmark.
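To make the idea of a hot-spot fix concrete, here is a hypothetical before-and-after sketch in Java (the names and workload are invented, not taken from the companion repository). It replaces a repeated linear search, the sort of code a CPU profiler flags, with a hash-based lookup:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical before/after pair illustrating the kind of hot-spot fix
// discussed in the post; not code from the companion repository.
public class MembershipCheck {

    // Before: List.contains is a linear scan, so checking every candidate
    // against a large list is O(n * m) and shows up as a profiler hot spot.
    static long countKnownSlow(List<String> candidates, List<String> known) {
        long count = 0;
        for (String candidate : candidates) {
            if (known.contains(candidate)) {
                count++;
            }
        }
        return count;
    }

    // After: copying the known values into a HashSet makes each lookup
    // O(1) on average, reducing the overall work to roughly O(n + m).
    static long countKnownFast(List<String> candidates, List<String> known) {
        Set<String> knownSet = new HashSet<>(known);
        long count = 0;
        for (String candidate : candidates) {
            if (knownSet.contains(candidate)) {
                count++;
            }
        }
        return count;
    }
}
```

The benchmark from Part I is what confirms whether a change like this actually moves the wall clock numbers.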

Continue reading “Fundamentals of Software Optimization Part II — Optimizing Wall Clock Performance”

Fundamentals of Software Optimization Part I — Benchmarking

This is the first post in a three-post series covering the fundamentals of software optimization. You can find the introduction here. You can find Part II here. You can find the companion GitHub repository here.

The introduction motivated why software optimization is a problem that matters, reflected on the fundamental connection between the scientific method and software performance analysis, and documented the (informal) optimization goal for this series: to optimize the production workflow’s wall clock performance and memory usage performance “a lot.”

This post will cover the theory and practice of designing, building, and running a JMH benchmark to measure program performance, and of establishing the benchmark’s baseline performance measurements.
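For readers who have not seen JMH before, here is a minimal benchmark sketch; the class name `WorkflowBenchmark` and the array-summing workload are placeholders rather than the benchmark developed in the post:

```java
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

// Minimal JMH benchmark sketch. The workload here (summing a random array)
// is a stand-in; the series benchmarks the companion repository's
// production workflow instead.
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Warmup(iterations = 5)
@Measurement(iterations = 5)
@Fork(1)
public class WorkflowBenchmark {

    private int[] data;

    @Setup
    public void setUp() {
        // Deterministic input so runs are comparable across code changes.
        data = new Random(42).ints(1_000_000).toArray();
    }

    @Benchmark
    public long runWorkload() {
        // Return the result so the JIT cannot eliminate the work as dead code.
        long sum = 0;
        for (int value : data) {
            sum += value;
        }
        return sum;
    }
}
```

With the standard JMH Maven archetype, a class like this is packaged into `target/benchmarks.jar` and run with `java -jar target/benchmarks.jar`.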

Continue reading “Fundamentals of Software Optimization Part I — Benchmarking”

Introduction to Fundamentals of Software Optimization

This is the introduction to a three-post series covering the fundamentals of software optimization. You can find Part I here. You can find the companion GitHub repository here.

Performance is a major topic in software engineering. A quick Google search for “performance” in GitHub issues comes up with about a million results. Everyone wants software to go fast – especially the users!

Gotta Go Fast!

However, as a general problem, software optimization isn’t easy or intuitive. Software performance tends to follow a Pareto-style distribution: roughly 90% of a program’s time is spent in just 10% of its code. It also turns out that in a large program, people — even professional software performance analysts who have spent their careers optimizing software — are really bad at guessing which 10% that is. So folks who try to make code go faster by guessing where the code is spending its time are much more likely to make things worse than to make them better.

Fortunately, excellent tools now exist to help people improve software performance, and many of them are free. This three-part series will explore those tools and show how to apply them to optimize software performance quickly and reliably.

Continue reading “Introduction to Fundamentals of Software Optimization”