
Preface
Financial planning software today remains a calculation-based approach that derives a supposed solution most in the profession call a plan. Under the current paradigm, much more is to be desired. Those desires are limited by the perspectives and interests of the software provider community, which is in turn driven by demands from a profession unaware of what may be, as compared to what is. This phenomenon is a form of groupthink: advisers are unaware of what may be, so they cannot ask software providers to develop something neither group is aware of, or thinking of. (1) Neither group seeks out information from other disciplines either, as explained below.
Maybe a bit critical, but perhaps some will see what may be possible through the lens of small paradigm shifts that come from applying key capabilities and insights from other professions. These capabilities and insights would move the profession from simple calculations (what most now call simulations by applying Monte Carlo, or stochastic, processes) to modeling the process of aging for retirement income planning purposes, both before and after retirement, with a seamless transition between working years and retirement years (and within those retirement years as well), based on the same modeling “imagines” described below. In other words, a unified model.
This topic is not a matter of opinion, as some may believe. It is a matter of disciplined experimentation involving control values so results can be meaningfully compared, contrasted, evaluated, and duplicated. Experimentation with control variables to test and prove or disprove hypotheses comes from my early days earning a BS cum laude degree in Physics. Each of the “imagines” below is testable (most already have been tested and published), and all can be combined into one model that has yet to be developed. Experimentation is how science, based on evidence, works.
The current paradigm is a continuation of the past deterministic approach in thought and concept, only with a Monte Carlo overlay where each simulation is cast over a single period, most commonly 30 years. What about other time periods? What about other efficient allocations? How do all of these compare to each other, and at what ages should they change? This single-simulation, single-calculation approach creates an illusion of modeling. Each and every time current-paradigm software runs, it performs redundant calculations that, with a small paradigm shift, need only be performed once and captured, as explained in the SIPMath segment below (SIPMath is a process that captures auditable data). (2) This present-day redundant-calculation approach is quite different from a complete adoption of Monte Carlo cast over rolling time periods whose lengths are all different and that incorporates all of the “imagines” below as well. Processing resources could be better used for modeling as imagined below, rather than performing redundant calculations each time a simulation is run with differing inputs. Retirement is not a single set period of time, as viewed from the current static paradigm. Retirement is actually a connected series of statistically determined time periods over an ever-shortening span of time, as viewed from a dynamic three-dimensional paradigm explained below.
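To illustrate the run-once-and-capture idea, here is a minimal Python sketch of storing a simulation’s iteration results as a reusable, auditable array, in the spirit of a SIP (the Stochastic Information Packet mentioned near the end of this post). The return model, file name, and percentiles are placeholder assumptions, not any vendor’s actual format.

```python
import numpy as np

rng = np.random.default_rng(42)

# Run the simulation ONCE: 1,000 iterations of 30 annual real returns (placeholder model).
iterations = rng.normal(0.05, 0.12, size=(1_000, 30))

# Capture the auditable array to disk; this is the "run once, use often" step.
np.save("sip_40_60_30yr.npy", iterations)

# Any later analysis re-loads the SAME iterations instead of re-simulating,
# so repeated runs with the same inputs are exactly reproducible and auditable.
sip = np.load("sip_40_60_30yr.npy")
wealth = np.cumprod(1 + sip, axis=1)                # growth of $1, no withdrawals
print(np.percentile(wealth[:, -1], [25, 50, 75]))   # year-30 balance percentiles
```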
Below is a discussion of a paradigm shift, with sidebars and postscripts following for further thought.

“Imagines …”
Here are a few “imagines” of incorporating other sciences (illustration follows):
- Imagine software that would suggest what model allocation one should have based on one’s current age and, not only that, model at what ages a retiree should consider changing that allocation as well. What allocation is optimal at the present age, relative to other efficient model allocation choices? This, as compared to what is done now: inputting an allocation that carries through the whole single simulation process, then “hunting and pecking” among contrasting alternative allocations, none of which adopt allocation changes for later ages.
- Imagine software that can compare your current allocation results (I’ll call them “inefficient”) to model portfolio allocation results (presumably models optimized for efficiency) over your lifetime, regardless of how old you may get, so that as a retiree you can decide what allocation is optimal at each and every age looking forward. Efficient model portfolios need only be simulated once, with that output captured to form a “data cloud” from which solutions are looked up and incorporated into the model. (3) It makes little sense to continually simulate something, using processing power, that can be done once with an auditable process and reused as often as needed, thus saving processing power for expanding modeling capability. All the statistical data sets may be updated, and simulation solution sets captured, annually. Only inefficient, non-model allocations would need their solutions simulated and captured for inclusion in the modeling software, for comparison to the results of the efficient models so better decisions can be made from model output.
- Imagine software that actually used longevity statistics based on each and every age one would live through, even to the end of the longevity table used; though most are unlikely to live that long, a small number might. Dynamically adjusting longevity with age is a strategic use of longevity statistics, as opposed to using rule-of-thumb ages, so this is an important capability for those who continue to live into older and older ages. Compare this to a rule of thumb of age 95 or 100, or any such assumed input age. Why not have the software determine what that end age should be simply through the application of statistical tables that already exist? Why guess or assume when we already know that the longevity tables change their statistical curve shape as one ages, and that the expected remaining time frame continually shortens? So model that changing shape based on every age a retiree may reach, through rolling adjustments within the modeling software itself (a sketch following this list illustrates the rolling horizon). The model simulations continue to the end of the life table; however, one may choose the age to which ending results are illustrated. Later results may simply be revealed without changing any earlier-age answers, or vice versa. In other words, under the current paradigm, if you wish to consider a different ending age to simulate to, you change all the answers between current age and ending age too! Interim results shouldn’t change unless by design (imagine #8). Draw-down rates slowly increase with age, though at a slower rate than Required Minimum Distributions (RMDs). Draw-down rates can be specifically calculated for each and every age based on the strategic use of life table statistics.
- Imagine software that actually looked at how retirees, or “real people,” adjust their spending as they age (e.g., the Bureau of Labor Statistics Consumer Expenditure Survey (BLS CEX)). That data has statistical curve shapes that also change over time (i.e., age). Rather than use average spending (The Flaw of Averages), incorporate those slow adjustments to spending by age directly into the model. If a retiree were concerned about medical expenses, the metalogged data (see #5 too) could focus on the medical expense percentiles by age to place more emphasis on those, or on any other expense area(s) of interest or concern.
- Imagine software that uses new statistical processes called metalogs to incorporate how statistical data is actually shaped, not only overall but also via time slices, so that data may be matched to each and every age a retiree has reached and may reach. Statistical data for portfolio characteristics, longevity, and spending can all be better viewed through metalog application, to better visualize how all those data types change shape with age (time) and allocation characteristics. Metalogging is a process that fits the statistical curve to the data (skewness, kurtosis, fat tails, lopsided tails, more than one mode, etc.).
- Imagine software that automatically compares possible cash flows (serially connecting simulation percentiles) from better-than-median markets, median markets, and poorer-than-median markets (a stress test of poor markets), where the better-than-expected, expected, or poorer-than-expected cash flows, and fees, from those respective portfolio balances would be adjusted within the software age, by age, by age.
- Imagine software that compares different “what-if” scenarios on an apples-to-apples basis by weighting each scenario’s cash flow at each age by the probability of being alive at that age, and then summing.
- Imagine inputting cash flow floors or ceilings to see how those affect outcomes. [This paper won the CFP® Board Best Research Paper Award at the 2016 Academy of Financial Services annual conference.] “Lumpy” expenses that aren’t ongoing can also be inputted and modeled. Using assets to buy an income annuity versus retaining those assets may be easily compared. Incorporating outside assets from a reverse mortgage may be included or excluded for comparison of outcomes. Visual outcomes that retirees may easily see (see the solutions graph below). Fama has said many times that models don’t predict future results; they aid in decision making. Retirement income planning requires many considerations for decision making. Software should model how all of the above factors blend together age, by age, by age as a retiree ages, for the purpose of better decision making, not only now but into the future as well, to see how today’s decisions may affect and influence future desires. How do those decisions “look?” See the example illustrations below.
- Imagine seamlessly incorporating pre-retirement accumulation that includes Social Security and/or pension income changes with possible claiming ages, to show which actions most affect a retirement feasibility timeline, showing all those alternatives at the same time rather than running them one by one. Imagine using Monte Carlo (stochastic modeling) throughout the model, from working age through the end of the life tables, to seamlessly transition from accumulation to draw down: bridge the gap between specific family standard-of-living needs and other income, then apply an age-based draw-down rate to determine the total portfolio value needed, by age, to develop such a feasibility timeline, all using the same age-based principles and Monte Carlo processes throughout. In other words, at potential retirement age X you have $Y from Social Security (and/or possibly a pension); what is the gap between that subtotal and the total need? Divide by the draw-down rate for age X (which is different for each and every age) to derive the portfolio value needed (a sketch following this list works this example). Compare to age X+1, or X-1, etc., to develop a timeline showing how earlier or later retirement compares, and what the additional savings needs may be, to simply transition from accumulation at age X to draw down at age X, using the draw-down rate for age X derived from the Monte Carlo model imagined in this post. Each and every calculation would be based on its own separate Monte Carlo simulation, and the model would serially connect cash flows and balances between them. This approach uses a more specifically derived method to measure and balance consumption and saving needs: a form of consumption smoothing using more specific computations that target a range of possible transition years while connecting one’s specific standard of living today to a specific target standard of living through retirement. And as that standard of living adjusts over the working years, there is a direct computational connection smoothing between working and retirement years. This contrasts with using averaged ratios or percentages of standard of living unrelated to specific situations (see imagine #4 above: The Flaw of Averages).
- Imagine software that combines all of the above!
- Imagine un-imagined or unlisted possibilities beyond the above!
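To make imagines #3 and #9 concrete, here is a minimal Python sketch of deriving a rolling, age-based horizon from a period life table and sizing the portfolio needed at a candidate retirement age. The life-table fragment, draw-down rates, and income figures are invented placeholders for illustration, not data from this post.

```python
# Hypothetical sketch: age-based horizon and portfolio-needed calculation.
# All numbers are illustrative placeholders, not published data.

# Fragment of a period life table: attained age -> {longevity percentile: end age}.
LIFE_TABLE = {
    60: {50: 85, 25: 92, 5: 99},
    70: {50: 87, 25: 93, 5: 100},
    80: {50: 90, 25: 95, 5: 101},
}

# Pre-simulated draw-down rates by remaining-years horizon: one slice of the
# "data cloud" (fixed iteration failure rate, one efficient model allocation).
DRAWDOWN_BY_HORIZON = {30: 0.041, 25: 0.045, 20: 0.050, 15: 0.058, 10: 0.075}

def horizon(age: int, percentile: int) -> int:
    """Imagine #3: remaining years = percentile end age minus attained age."""
    return LIFE_TABLE[age][percentile] - age

def portfolio_needed(total_need: float, other_income: float, dd_rate: float) -> float:
    """Imagine #9: the income gap divided by the age-based draw-down rate."""
    return (total_need - other_income) / dd_rate

# Example: a 70-year-old planning to the 25th longevity percentile.
h = horizon(70, 25)                                   # 93 - 70 = 23 remaining years
nearest = min(DRAWDOWN_BY_HORIZON, key=lambda k: abs(k - h))
dd = DRAWDOWN_BY_HORIZON[nearest]                     # nearest pre-simulated horizon
print(round(portfolio_needed(60_000, 30_000, dd)))    # ~666,667 portfolio needed
```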
Discussion.
All of these possible modeling insights allow for rolling adjustments over time, providing insight into what those slow adjustments to the length of the portfolio drawdown period may be, and modeling cash flows and portfolio balances with those adjustments from present age to any age in the future, even to the end of the table being used, ideally into ages 100+.
Such programming allows for viewing outcomes and planning from those possible elderly ages in reverse, back to retirement, and even applying the same concepts to any pre-retirement-age modeling as well. In other words, modeling as imagined above reverses the planning process: instead of going from today into the future, it looks from the future back to today, based on how today’s decisions influence the desired future outcomes. Today’s decisions ripple through the model via prudent management of the spending slope, or glidepath.
All of these statistical data sets are available. All of them may be metalogged. All of them may be incorporated into software that would actually model a lifetime of possibilities and compare those possibilities to other decision choices retirees may make, from spending to saving. Modeling removes the term “probability of failure” and replaces it with “range of outcomes.” Software such as this would have a “data cloud” where alternatives may be compared and the optimal cash flow may be determined through a lookup function using statistical data for efficient allocations (inefficient allocations would also be calculated for comparison through the programming, though not beforehand as is possible with predetermined efficient model allocations updated annually).
You see, “failure” is a term referring to the iteration failure rate (the profession adopted the term “probability of failure,” expressed as a percentage) of the simulation set, which is made up of 1,000 or more iterations with some percentage failing to extend beyond the length of time the simulation set is cast over, say 30 years, though it can be over any period of time, even just 1 year in length. However, simulations are separate calculations that go into a model. The iteration failure rate of each calculation is not the same as the model failing. Models should calculate the range of possible actions that keeps the model viable through small spending adjustments based on the reality of portfolio values over time. Wide annual swings in portfolio values can be smoothed out through various strategies the model should also incorporate. Models and calculations are two different things, as explained further below.
How does today’s decision affect outcomes many years in the future? The current paradigm views a single set of simulations as the “model” (one simulation), as graphed below: an ever-widening range of potential outcomes. Where’s the “answer” to the problem of retirement income in such a graph? Most look to the range of outcomes on the right side, which are actually akin to “remainder values” of the simulation. The “answer” to the problem is actually on the left side of the graph (where few look)! The number of simulation iterations is how the “answer” is derived.

Software as imagined in the many “imagines” above may result in solutions graphed by age. Multiple-solution graphing produces results similar to those below (almost 200 simulations graphed).* Every data point below is the result of its own stochastic simulation based on ever-changing inputs (all the imagined points discussed above are inputs), from data most closely associated with each attained age in the graph (not tied to the age the simulation was started with, which results in graphing similar to that above).
Do you see the differences between the output graphed above and the output graphed below?
The graph below seeks to bring in uncertainties from the science of many disciplines to illustrate the range of uncertainty when all “imagines” above are combined into a single model.
It is well known that changing the length of a simulation changes the results. Changing the allocation changes results too. A single simulation doesn’t, and can’t, account for changing both age and allocation within that simulation as a retiree ages. Each of the “imagined” variables above should be simulated ahead of time for model portfolios (Probability Management / SIPMath) so that the optimum combination of those pre-simulated models may be selected by the programming, based on each attained age, to illustrate possible outcome ranges as the retiree is projected to age. Each point on the graphs below comes from its own Monte Carlo simulation for each individual time period and portfolio value. Note that the ever-widening fan shape from Monte Carlo simulations (see graph above) is more narrowed and focused in a model approach.

Retirement is fundamentally a time problem: time that should be modeled looking forward (statistical time left, or remaining, counting the numbers DOWN), instead of referencing backward, anchored to a number of years in, or since, retirement (counting the numbers UP). Since the mortality tables change shape internally with age, slowly over time within the same table, those future time-remaining calculations should adjust for each age as well. It’s a rolling model, with a decreasing number of years remaining to be funded. Allocation also changes statistical shape when allocation is matched to time remaining. Spending (BLS CEX) also changes statistical shape as retirees age. In other words, all of the “imagines” above change statistical shape as retirees age and have less time remaining, time that continuously rolls ahead of each retiree simply because they’re still living, albeit in ever-shorter periods. A retiree never reaches their expected longevity for present attained age, since by definition those living always have an expected longevity in the tables. It’s like the bow wave on a boat: it’s always there until the boat stops. A retiree doesn’t need as much “principal” to fund shorter time periods (elderly years) compared to what was needed to fund longer time periods (younger retirement years). Software should make annual adjustments as one ages to slowly transition between younger and elderly years. It’s those remaining years at all ages, based on attained age (looking forward), that need funding and should be modeled, not how many years since one retired (looking backward).
Life is not one decision and then no more into the future. Life is a series of decisions made with new information as it becomes known while we age. Modeling is much the same. As new data come in each year for portfolio characteristics, longevity, and spending patterns, the model is redone each year. This allows for small adjustments to be made explicitly, based on new calculations rippling through the model. More refined decisions are possible with modeling than with a set of rules of thumb.
It’s not product, but process that is missing. The profession seeks a product to solve the lifetime retirement income problem. But a product today quickly becomes out of date over time – and thus the need to switch products (but when?) emerges. It is not a product, but a process that is needed. Modeling the process of retirement income over a retiree’s lifetime is what is missing. Process should focus on the retiree’s desired individual (single or couple) standard of living and the desired glide path of that standard (ref: BLS CEX).
The markets always go up and down, sometimes more than at other times. Just because anybody retires does not mean that market behavior suddenly changes. No, sequence risk is always present, and one should expect the markets to continue their up and down swings after retirement about as often as during the working, accumulation years. Yes, withdrawals do matter; however, the above graph shows it is possible to model a range of that market behavior to calculate meaningful action, typically found to be small adjustments to spending (which tends to be a natural reaction during great market stress too). Models should pre-calculate at what point portfolio values suggest such small spending adjustments, both on the upside swings (e.g., when to do a house repair or replace the auto?) and on the downside swings (when to retrench spending, and how much should the new spending be?). Modeling should suggest actions ahead of time rather than reacting in the moment, thus providing an ability to evaluate possible courses of action in advance, with better precision (2) than exists today. Advisers and retirees shouldn’t need to reinvent the wheel reacting to market action. Instead, they should have pre-determined, more precise trigger points with meaningful, evidence-based actions already established and updated annually in a Distribution Policy Statement (DiPS).
Regardless of when you retire, markets are always uncertain. When does a retiree make adjustments when poorer-than-expected markets come? Answer: portfolio values and spending for such situations can be calculated using the very same methodology as retirement income calculations, only in reverse, to determine a target portfolio value that supports a decreased spending amount ahead of the event. What affects the allocation decision the most? Answer: time (age), which can also be pre-calculated based on the imagines above.
The imagines above are my combined hypothesis, which I put out there to be proved or disproved, in whole or in part.
Though my focus has been on the use of stochastic, or Monte Carlo, simulations, other methods may also be used in the determination of each data point making up the data cloud used in the modeling described herein. The important point is the development and use of the data cloud in order for the modeling software to discern optimum allocation selection based on age and time period coming from the inputs of longevity percentiles, draw down caps (as shown below), spending ceilings and/or floors, and other model designs.
A paradigm shift in both the profession and the software supporting it is needed to make the next jump in perspective and application: from calculations (currently an extension of deterministic thought with a stochastic overlay that does one calculation at a time, even for retirees retiring at different times) to modeling, which is a full embrace of data from many stochastic processes organized by the attained age of retirees. Much of that metalogged data may simply be accessed through a lookup function as well. We’re close, for those who dare lead that paradigm shift. Finally, planning-community push back is quite the opposite of embracing evidence derived from research within our own planning profession, as well as from other bodies of knowledge such as the actuarial and statistical sciences mentioned above.
Some in the profession may take exception to this post as they “protect their rice bowls” (a phrase from my first career as a military officer, pilot, and contingency war planner). It is my hope that this post leads to deeper thought and further exploration by others into how to apply scientific thought processes and insights from other sciences and professions, to move the planning profession forward from its present limited views and approaches, which are possibly based on confirmation-based research rather than an exploratory- or hypothesis-based approach that tends to broaden insights, findings, and perspectives. The academic methodology is to disprove or duplicate prior research, as well as to expand the body of knowledge in the subject area of investigation. The application of science and evidence should distill things down to fewer, more refined models, rather than the many unduplicatable solutions apparent now, which don’t provide specific answers but rather general rules and guidelines that constantly shift with the times. For example: is it a 4% rule or a 3% rule now, and when might it be 4.5%? What inflation rate applies to that rule? How does the rule change with age? And how does a retiree transition to another adviser, say 5 or 10 years into retirement, using any of the many rules out there, when the retiree’s past isn’t known to the new adviser? (Models would at least have a future range of solutions very close to those modeled in the past, so the transition would be more known to both retiree and adviser: they simply pick up where the retiree is now in their model and move forward from there.)
The profession is currently too fragmented and fractured, with a majority focused on sales and competition rather than advice and cooperation. It is no wonder to me that consumers have little trust in the profession when they get such a wide range of “answers” everywhere they go. Relying solely on likability isn’t a hallmark of any professional occupation. Modeling provides more insights than one-by-one simulations.
I hope you find this post thought provoking and that it stimulates conversation and action within the practitioner community and the necessary supporting sciences, not necessarily exactly as “imagined,” but at least including the main themes and concepts, to advance the profession toward more professionalization. That means more standardization focused on planning, versus product promotion, toward helping the client. Clients are called patients in the medical profession, which has more established processes, protocols, and procedures based on the embrace of a more evidence-based approach deeply rooted in the application of many science disciplines. Doctors don’t measure themselves in terms of production; they measure themselves in terms of patient outcomes. Our profession needs to do the same by measuring successful retiree outcomes as they approach, transition into, and live through retirement.
*Note that the model illustrated, above or below, doesn’t do all of the “imagines” above. But just “imagine” if it did, since it already illustrates the difference between the simulation paradigm and the model paradigm with just some of the “imagines” applied (life table percentiles, with the fixed variable of iteration failure rate for consistent data comparisons, and rolling time frames)! All values are expressed in “today’s” (real) dollars. The 25th percentile represents continuous market returns below the 50th (expected) percentile, and the 75th percentile represents continuous market returns above the 50th percentile. [Note that these are example percentiles for illustration purposes.] Further research based on metalogged portfolio standard deviations for each portfolio would be the next “imagined” refinement: software that calculates specific spending adjustments based on specific portfolio values at current, and future, ages. These selected percentiles represent results through spending flexibility, where actual spending may “wiggle” over the years somewhere between the upper and lower bounds (though in clinical practice, a dramatic and feared yearly swing in spending has NOT been observed). These values are specifically calculated for spending based on portfolio values representative of those percentiles. This contrasts with the practice today of estimating percentage rule-of-thumb adjustments to spending. Note too that there is no “failure”: income stays at or above the “fixed income” (Social Security) level.
Add another of the “imagines” above: how actual retirees really do spend (BLS CEX), graphed as a comparison to how a specific retiree (you) may be able to spend, is illustrated below to give a better idea of how your actual spending may change as you age. This gives very useful insight for decision making about spending versus later portfolio values (arguably bequests, also illustrated, intended or unintended). Below is the same graph as above, with the added perspectives of spending trends with age and a veiling of low-probability ages (ages one is unlikely to live to, based on present age).

Circling back to the spending “floor” or “ceiling” concept mentioned in imagine #8 above: the conceptual floor would be a constant, level spending line across the ages. However, with the insight of how retirees actually spend their money in retirement, slowly decreasing with age, it makes sense to have the “floor” or “ceiling” lines slowly decrease over time as well, informed by metalogged BLS CEX data by age (a sketch of metalog fitting follows). The contrast between the conceptual constant spending and “real people” spending is graphed as an example in the graph above.
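For readers curious what “metalogging” that spending data might look like in practice, here is a minimal sketch of fitting a four-term metalog quantile function (Keelin 2016) to one age slice of spending percentiles by ordinary least squares. The percentile values are invented for illustration; real work would use BLS CEX data and more terms as the fit requires.

```python
import numpy as np

# Hypothetical annual-spending percentiles for one age slice (invented numbers).
probs = np.array([0.10, 0.25, 0.50, 0.75, 0.90])
spend = np.array([32_000, 41_000, 52_000, 66_000, 85_000])  # dollars

# Four-term metalog basis: M(y) = a1 + a2*L + a3*(y-0.5)*L + a4*(y-0.5),
# where L = ln(y / (1 - y)).  Fit the coefficients by least squares.
L = np.log(probs / (1 - probs))
X = np.column_stack([np.ones_like(probs), L, (probs - 0.5) * L, probs - 0.5])
a, *_ = np.linalg.lstsq(X, spend, rcond=None)

def metalog_quantile(y: float) -> float:
    """Spending at cumulative probability y under the fitted metalog."""
    l = np.log(y / (1 - y))
    return a[0] + a[1] * l + a[2] * (y - 0.5) * l + a[3] * (y - 0.5)

print(round(metalog_quantile(0.50)))   # ~52,000: the fitted median
```

Once fitted, the same few coefficients reproduce any percentile of that age slice’s spending distribution without storing the raw data, which is what makes age-by-age shape changes cheap to carry inside a model.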

The graph above illustrates the “ceiling” concept by adjusting the length of the model time periods, starting with the 25th longevity percentile (versus starting at the 45th in the prior two graphs) and adjusting down to the 5th longevity percentile by age (e.g., 25% of the age group outlives the longevity table age), while putting a spending ceiling of a 7% maximum draw-down rate. This graph thus shows how the “slope” of spending and balances may be custom designed to each retiree’s desired future outcomes through a modeling approach, one that influences the future years much more than the near-term years. This example shows that constraining spending leads to higher balances at later ages. Does constraining spending today lead, intentionally or unintentionally, to greater adviser fee retention? Or are these trade-offs discussed with the retiree so that the retiree decides? Modeling allows for a more nuanced and dynamic conversation with retirees about the interactive and interconnected nature of retirement spending and bequests.
The longevity tables change shape with age. Adjusting the longevity percentile slowly with age, in a rolling manner, results in time periods the retiree probably won’t outlive. For example, starting with the median 50th percentile at age 60 (i.e., 50% of that age group, single or joint, outlives the resulting end age) and adjusting the percentile down by 1 each year leads to the 20th longevity percentile at age 90, where 20% at that age, joint or single, outlive the time period to that resulting end age. This also slowly adjusts the draw-down rate, in a rolling manner, from a lower rate in younger years to a higher rate in more elderly years, because the longer the withdrawal period, the lower the draw-down rate. Use of the longevity tables directly connects statistical years of life to time periods for more refined computational purposes, allowing a smooth year-by-year transition in retiree spending. Unlike the RMD tables, which carry a bias toward a taxation agenda, direct use of the longevity tables connects spending rates with age. This also provides a standard with which to professionalize the profession further.
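A minimal sketch of that rolling percentile schedule, assuming the simple one-percentile-per-year decline just described (the start age, start percentile, and floor are the example’s numbers, not a prescription):

```python
def longevity_percentile(age: int, start_age: int = 60,
                         start_pct: int = 50, floor_pct: int = 20) -> int:
    """Rolling schedule: drop one longevity percentile per year of age, never below the floor."""
    return max(floor_pct, start_pct - (age - start_age))

for age in (60, 70, 80, 90, 95):
    print(age, longevity_percentile(age))   # 50, 40, 30, 20, 20
```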
More model insights are expanded on in the sidebars and postscript sections below.
Sidebars from above footnote indicators:
(1) From the perspective of my nearly three decades in the profession as a practitioner, more than half of that time also as a researcher puzzling over what makes retirement income simulations and modeling tick, I don’t see the advances in approach and application in the financial planning profession that I see coming from other professions and the sciences. Innovation in thought and application is slower in the planning profession. I believe this is due in no small part to software development, and the profession, being focused on the sale of products rather than on models that support planning, advising, and retiree decision making.
(2) Auditable data: Each simulation should contain a sufficient number of iterations such that a repetition of that same simulation would produce statistically similar iteration results when iteration percentiles are compared. Also, a sufficient number of iterations such that the differences between iteration percentiles produce meaningfully different results, specifically the end-of-year-1 percentile values for cash flows and portfolio balances for the uses “imagined” above. Finally, a sufficient number of iterations to produce discernible draw-down rate results to at least two decimal places, which allows for visible differences between asset allocation results by age, and between draw-down rates by age as well. Yes, such precision is possible when evaluating simulation iteration percentiles at the end of year one, and it is comparable between different simulations when the iteration failure rate is held constant as a control variable. Draw-down rates form a three-dimensional “data cloud” where age (time), allocation characteristics, and iteration failure rate (commonly understood under the present paradigm as probability/possibility of failure) together derive the draw-down rate (also called the withdrawal rate).
Modeling software should be programmed to optimize the drawdown by using the “data cloud” to simultaneously compare alternative efficient allocations at the present age and at all future ages too. Present software requires a hunt-and-peck approach to compare allocation choices, not only at the present age of the retiree but at any future age as well. Not only this, but the connection between today’s age and any future age is missing for both cash flows and portfolio balances after fees. Today’s approach yields simply one data point among many other allocation possibilities for the present retiree age, with no data points available for decision making at future ages. Additionally, there is no consistent, strategic use of life tables and longevity percentiles to determine a statistically useful ending age in the first place. A data cloud would compile data once and be usable often (“Run Once, Use Often”), with annual updates to the data and thus to the data cloud. The hunt-and-peck approach to planning is long overdue for improvement.
Draw-down precision, especially to two or more decimal places, is missing in the present paradigm. Such precision allows for strategic use of iteration failure rates, contrasting 10% versus 25% data clouds for example, which can then be used for pre-calculated decision rules specific to present portfolio values and allocation characteristics, to develop a Distribution Policy Statement (DiPS) updated annually. The 25% data cloud would signal a target portfolio balance that triggers a spending retrenchment suggestion, as well as the specific spending level (cash flow) then suggested. Clinical application of this approach shows that spending retrenchment is often less than imagined or feared, and it is often an automatic behavioral reaction of retirees, clinically evidenced in both the 2008 and 2020 market and economic downturns. A DiPS should address the behavioral aspects of retirement, supported by both process and structure, for the whole of the present year, computed in advance at the beginning of each year: what do you do when portfolio values reach $X, $Y, and $Z (beyond simply “stay invested” and “cut spending,” though those are behavioral steps too)? The when and why, supported by computations of specific portfolio balances for the year in advance, is what is lacking. Modeling should automatically address this and produce a DiPS document.
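A minimal sketch of such a pre-calculated decision rule, assuming draw-down rates already looked up from hypothetical 10% and 25% iteration-failure-rate data clouds; every number, and the trigger logic itself, is an illustrative assumption rather than the method prescribed here:

```python
# Draw-down rates for the same age and allocation, looked up from pre-simulated
# data clouds at two iteration failure rates (IFRs). Invented placeholder values.
dd_10 = 0.042      # 10% IFR: baseline planning rate
dd_25 = 0.051      # 25% IFR: the "warning" cloud
spending = 50_000  # current annual cash flow

# Balance at which current spending would sit on the riskier 25% IFR cloud:
trigger_balance = spending / dd_25                # ~$980,392: retrench below this
# Spending that restores the baseline 10% IFR at that trigger balance:
retrenched_spending = trigger_balance * dd_10     # ~$41,176

print(f"Retrench below ${trigger_balance:,.0f}; "
      f"new spending about ${retrenched_spending:,.0f}")
```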
(3) Drawdown rates depend on certain set factors, which are already established by using efficient model portfolios along with strategic use of the other factor inputs. These factors are: 1) portfolio statistical characteristics (ideally metalogged) that don’t change until dynamically updated periodically, say annually, since they’re efficient models by definition; 2) the length of time the simulation is run over, determined by current age to the longevity-table-derived end age (the difference in years between the derived table end age and the present retiree age(s)); by the way, longevity tables also have percentile statistics that can be strategically used to slowly raise the table age that fewer and fewer survive past, a rolling adjustment of that survival percentile which slowly extends the spending time frame to reduce the possibility of outliving portfolios (excluding catastrophic spending, which can’t be predicted); and 3) the iteration failure rate, which standardizes the output of simulations for apples-to-apples comparison when the same iteration failure rate is used to develop the “data cloud.” It takes the three dimensions just discussed to derive the intersecting fourth point (the “answer”). In other words, a three-dimensional model makes up the data cloud comprised of all those “fourth point” solutions, from which to compare and contrast solutions based on optimal results and desired retiree outcomes. How can anyone provide a computational answer under the current paradigm when each of the three dimensions above is only loosely defined by rules of thumb (e.g., the 4% rule, a 60/40 allocation, age 95, or variations thereof)? The answer is discussed in the professionalization postscript below.
A “data cloud” consists of all of the compiled answers and auditable data of each and every simulation run for all of the different allocations and all of the different time frames. Software then looks up the optimal solution between allocation data points corresponding to each time frame, where the time frame is derived from longevity tables (longevity table percentile end age minus each attained age modeled). There is no need to re-simulate solutions since they have already been done and archived in the data cloud for subsequent use in each cash flow model run.
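A minimal sketch of that lookup idea: a table keyed by (model allocation, remaining-years horizon) at one fixed iteration failure rate, populated once from archived simulations and then consulted instead of re-simulating. The rates below are invented placeholders, and “highest sustainable draw-down rate” is just one simple selection criterion used for illustration.

```python
# One slice of a "data cloud" at a fixed iteration failure rate.
# Key: (model allocation, remaining-years horizon) -> draw-down rate.
# Each value would come from one archived, auditable simulation; these are invented.
DATA_CLOUD = {
    ("40/60", 30): 0.0413, ("40/60", 20): 0.0482, ("40/60", 10): 0.0807,
    ("60/40", 30): 0.0430, ("60/40", 20): 0.0505, ("60/40", 10): 0.0820,
}

def best_allocation(horizon: int) -> tuple[str, float]:
    """Look up, not re-simulate: compare efficient allocations at one horizon."""
    candidates = {alloc: dd for (alloc, h), dd in DATA_CLOUD.items() if h == horizon}
    alloc = max(candidates, key=candidates.get)
    return alloc, candidates[alloc]

print(best_allocation(20))   # ('60/40', 0.0505) under these placeholder values
```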
Postscripts:
PS. Three other points. First, replacing lost survivor income at any age, either pre- or post-retirement, is like retirement income planning in that the objective is to determine the amount of assets (possible sources: life insurance, reverse mortgage, portfolio asset set-aside, downsizing, etc.) needed to replace that lost income, presumably for the remainder of life, though it could be for shorter periods. Other income sources reduce the amount needed for that replacement. Second, barring catastrophic spending, the objective of continuing to have assets to support income regardless of age while alive means there are assets remaining at each attained age to fund the remaining years. This means estate planning should be kept current so those assets are bequested when that time inevitably comes, at any age, young or old. Such an approach is akin to buying an income annuity one year at a time, with the portfolio balance acting as personal mortality credits to purchase each subsequent year, all the way through to the end of the longevity table in use. As long as the retiree is alive, portfolio value needs to be reserved to support those future years from the longevity table, year by year in a rolling manner. Third, fees come from portfolio balances, as do cash flows. These should be computed separately for each simulation iteration percentile, so that higher-balance iteration percentiles generate correspondingly higher fees and cash flows, and lower-balance percentiles generate lower fees and cash flows, adjusted for each age too. Each age and iteration would have its own fee and cash flow calculation, to more closely model the reality of aging along with a range of potential portfolio balances as a retiree ages. These model estimates would adjust with each dynamic update of the data within the model over time, thus updating the estimates with age too.
Speaking of fees, the method of fee payment should be modeled appropriately as well. Net retiree income depends on the fee method, as the example graphic below shows. Note that the 4% draw-down rate below is for example only, to contrast how a retiree pays their adviser fee; the actual draw-down rate would vary by age as described in this post.

The draw-down and/or fee percentages scale proportionally regardless of portfolio balance. Fees are not a constant across the model, since model percentiles with higher portfolio balances will have proportionally higher fees, and vice versa for lower-percentile portfolio balances. Cash flows thus differ at each model age depending on what portfolio balance is modeled at each iteration percentile. This is especially true for blended fee schedules, but still true for flat percentage fee schedules as well. Finally, taxation of fees from qualified retirement plans differs between the upper and lower examples in the graph above.
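A minimal sketch of per-percentile fee and cash-flow computation, assuming hypothetical percentile balances, a flat advisory fee, and one possible convention (the fee paid out of the gross draw-down); a blended schedule would replace FEE_RATE with a tiered function.

```python
# Hypothetical end-of-year balances by iteration percentile (invented values).
balances = {25: 820_000, 50: 1_000_000, 75: 1_210_000}
FEE_RATE = 0.01    # flat 1% advisory fee, for illustration only
DRAWDOWN = 0.045   # age-based draw-down rate for this attained age

for pct, bal in balances.items():
    fee = bal * FEE_RATE                   # higher-balance percentiles pay proportionally more
    net_cash_flow = bal * DRAWDOWN - fee   # one convention: fee paid from the gross draw-down
    print(f"{pct}th percentile: fee ${fee:,.0f}, net cash flow ${net_cash_flow:,.0f}")
```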
PPS. I’m a planner rather than a software programmer. I lead through thought experiments (a discipline from my university physics days), supported by existing research from many disciplines, some of it developed by me. The application of all that research should be picked up by the software programming community (and requested by the planning community). All of the above “imagines” are within reach. They simply need programming that puts them all together to form a model approach, versus the current-day single-calculation approach. The profession is unaware of the potential, but waiting. The software community is unaware it should develop and supply such capability, given the perceived demand for present software packages, i.e., the needs and desires resulting from the old paradigm both communities share. Finally, planning-community push back is quite the opposite of experimentation and embracing evidence derived from research. What is lacking in the financial planning community is a rigorous application of the scientific method beyond the common 30-year view, or the 60/40 stock/bond allocation, as I’ve mentioned above (and few 60/40 allocations are comparable to any other 60/40 allocation, due to the degree of efficiency of the combined components). As I’ve mentioned, there are other time periods and other allocations that come into a comprehensive model to replicate aging properly. The allocation glide path decreases equity exposure with increased age when shorter time periods are compared to longer time periods. Allocation data are often arranged in reverse when viewed through the prism of years since retirement, with longer-time-period data following shorter-time-period data. But that is not how time periods fall through the prism of time when arranged by age using life tables.
PPPS. How does financial planning become a profession? One step is to develop standardized processes that can be calculated and audited. As it stands now, the rule-of-thumb approach of an X% withdrawal rate subsequently adjusted for some amount of inflation leads to an issue when a retiree changes advisers (by their own choice, or due to adviser retirement, etc.): the new adviser (or the retiree) has no point of reference for how the retiree arrived at their current spending amount, because it is based on some point in the past (“years since retirement”); nor is cash flow directly related to actual portfolio value once retirement is put into motion. There’s a disconnect between current cash flow and current portfolio value. Under a “time since retirement” view, how does an adviser compare retirees who retired at 60, 65, 70, or any other age, when they all have different “time since retirement” frames? A lot of “Kentucky windage” is needed under the rules-of-thumb approach when each retiree has a different time frame since retirement, and thus different timing of past adjustments, to arrive at the income they’re receiving presently. The modeling methodology imagined above keeps current cash flows aligned with current portfolio values, at all times and at all ages, calculated for each age specifically, since it is modeled looking forward at all points in time, including ages the retiree is presently unlikely to outlive, by explicit use of life tables. Updating the data each year tweaks the inputs, and thus the output, slowly over time, through a process I call “Dynamic Updating.”
Which life table(s) to use? Any or all, because each table is a measurement of a subset of the same population. For example, the Social Security tables might be viewed as the whole population (at least insofar as a large part of the population receives Social Security), a spectrum of the more healthy mixed with the less healthy, while annuity tables measure a healthier subset of that same population. Which table represents the specific retiree sitting in front of the adviser? Again, model in the manner that most closely fits each retiree. Survivor income for either member of a couple after the death of the other is also possible, by modeling the asset/insurance/reverse mortgage/annuity need to replace that lost income. For example, there’s an automatic income cut when one of a couple dies. What is the lost income from that pay cut? What lump sum is needed to replace that loss? What is the source for that lump sum?
PPPPS. Retirement researchers tend to be “young.” By “young,” I mean they’re still working and not yet retired themselves; by definition, retirement would end their research years. I bring this up through the observation that research from “young” researchers tends to view the retirement problem as a long-term problem; in other words, 30-year time frames bias the results. Why is research only over 30-year periods? Why not include shorter time periods appropriate for older retirees? Or longer time periods for those retiring in their early 60’s, or forced to retire in their late 50’s or even earlier? How does the current paradigm transition from longer to shorter simulated time periods? Retirement planning is not a rolling series of 30-year simulations, which would imply retirement never ends; it eventually does. Modeling shouldn’t be over a single time period either. Shorter time frames produce different results than longer time frames, results that are not apparent because researchers do not consider the shorter time frames that apply to more elderly retirees. Statistically, there is a larger number of retirees with less than a 30-year time frame, and a small subset with longer than 30-year time frames; so a 30-year time frame really applies to a very small subset of retirees when viewed through the lens of longevity tables. Why would a 75-year-old (or older) use a 30-year retirement period?

You can prove this time-frame bias to yourself by running shorter retirement periods. 1) You get different results than you do from longer periods. 2) You question yourself because the shorter period ends at an earlier age, one you may expect to outlive now, so you throw out that different result as “not applicable.” This proves my point: your bias is based on the retiree’s (or, in this segment, the researcher’s) age today, not on applying a time frame applicable to a retiree when they’re older, where that shorter period takes them to an older age you can’t imagine outliving! Practitioners also have a client base forming a bell curve with the median age near their own age, and since those clients are not yet elderly, practitioner views are also biased toward longer terms appropriate for younger retirees (those in their 60’s and early 70’s) that don’t really apply to more elderly retirees (who may not be in their client base yet). Thus researcher and practitioner focus tends to lack a model that transitions retirees from 30 years of retirement into 25 years, then into 20, 15, 10, and even 5-year terms. In fact, models should transition annually, for smoother transitions between ages, in a methodical manner that directly connects draw down to present balance, allocation, and age, and simultaneously compares alternatives, between each point throughout the model. No model approach existing today smoothly transitions, within the model, from longer to shorter retirement periods. There’s no model that applies to more elderly retirees today, yet younger and older retirees coexist along the whole spectrum of ages. How does a retiree transition, incrementally year by year, from the longer time frames to the shorter time frames that apply as they age? There is no modeling software that shows how a younger retiree today may age into that older retiree. Nor is there modeling software that shows an older retiree today how to transition into an even older retiree later on, or that evaluates and assists that even older retiree in their decision making at even older ages. Again, different retiree ages coexist at the same time. Programmers and practitioners shouldn’t just be looking at retirees in their 60’s or 70’s, but also at those who are, or will be, in their 80’s, 90’s, or 100’s. More importantly, how does one transition year by year from any age to each age thereafter? Software that models that aging transition is a practical guide for making transitional decisions slowly over time via “dynamic updating.”
Epilogue
The above may seem “too complicated.” Modern surgery, for example, has advanced considerably over the decades, unhindered by “that’s too complicated” thoughts. Computer processing power and software programming have come a long way in the past couple of decades and can manage the above imagines in a combined and fluid manner. Retirement, too, is a process that is more complicated than one-at-a-time, single-period simulations or calculations viewed alone as answers. Alternatives can be compared by programming, and optimal solutions modeled and adjusted from there, instead of doing this manually through one-at-a-time simulations over single periods as mentioned above. It is time the profession combined present research insights into a unified model, one that incorporates aging directly, to advance the profession further.
I challenge the reader to adjust their paradigm to data cloud modeling, rather than trying to squeeze all of the above into the present one-by-one simulation paradigm, where different allocations are compared one by one, different time frames are compared one by one, and transitions over time (which model aging) of both allocations and time frames at the same time are not compared at all. A data cloud approach can easily choose between data points, since the simulations have already been done. Why keep repeating simulations during the course of the year as if the range of iteration outcomes might suddenly change given the same data inputs? The data cloud changes when the data changes, from year-to-year data reviews.
This isn’t meant to diminish the research work of all those who have contributed to the body of knowledge. It acknowledges those key stepping-stone insights, and extends and expands on them with a nudge toward lesser-explored directions via an important, and arguably necessary, next-step shift in paradigm, to advance the profession and its processes further into more refined and advanced modeling.
Only that which is untried remains unknown. That which is newly tried often leads to unimagined insights.
Note: The 3D nature of retirement income withdrawals from portfolios over time and allocation was duplicated through a different methodology in:
Suarez, E. Dante. 2020. “The Perfect Withdrawal Amount Over the Historical Record.” Financial Services Review, The Journal of Individual Financial Management. Volume 28, No.2: 96-132, Figure 26.
One of the goals many people imagine is having a steady income in retirement. That’s ideal, but as alluded to in the “imagines” above, it is not how retirees spend their money according to the BLS Consumer Expenditure Survey data.
Even while working, people often forget how much their income varied; they can see this themselves by looking at the earnings history page of their own Social Security Statement. The Illusion of Steady Income:
https://blog.betterfinancialeducation.com/behavior-corner/the-illusion-of-steady-income/
Today’s paradigm has TIME (number of years used in the Monte Carlo (stochastic) simulation) as the control variable to compare results. Time is NOT fixed, but changes as one ages, especially when period life tables are used for this purpose. Thus, what should the control variable be for comparison of results? Iteration Failure Rate (IFR) of the stochastic simulations.
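A minimal sketch of using the IFR as the control variable: given simulated annual real-return paths (a placeholder return model here), solve by bisection for the drawdown rate whose iteration failure rate hits a target. The return parameters, horizon, and iteration count are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_rate(dd: float, returns: np.ndarray) -> float:
    """Fraction of iterations whose balance is exhausted before the horizon ends."""
    n_iter, horizon = returns.shape
    balance = np.ones(n_iter)             # each iteration starts at 1.0 (normalized)
    alive = np.ones(n_iter, dtype=bool)
    for t in range(horizon):
        balance = (balance - dd) * (1 + returns[:, t])   # constant real withdrawal
        alive &= balance > 0
    return 1 - alive.mean()

def solve_drawdown(target_ifr: float, returns: np.ndarray, lo=0.0, hi=0.25) -> float:
    """Bisect for the drawdown rate whose iteration failure rate matches target_ifr."""
    for _ in range(40):
        mid = (lo + hi) / 2
        if failure_rate(mid, returns) > target_ifr:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Placeholder return model: 20-year horizon, 1,000 iterations, ~5% mean real return.
paths = rng.normal(0.05, 0.12, size=(1_000, 20))
print(f"{solve_drawdown(0.10, paths):.2%}")   # drawdown rate at a 10% IFR
```

Holding the IFR fixed this way is what makes a 10-year result directly comparable to a 30-year result in the same data cloud.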
Some have misconstrued the discussion above as a critique of past research. Not so. Most of the research is good. The critique is that the data is arranged backwards: a counting-SINCE-retirement paradigm. I have yet to come across a retiree who is trying to make today’s retirement decision based on having retired, say, 7 years ago. They’re more interested in a decision based on how many years they’re expected to still be spending; in other words, based on expected longevity NOW, not THEN. The paradigm shift is counting years AHEAD, not behind. https://blog.betterfinancialeducation.com/sustainable-retirement/just-what-should-an-annual-checkup-do-for-you-during-retirement/
In other words, it is a rolling decision. Data should be arranged with the longest time interval FIRST, going down to the shortest interval (not just a simple 30-year research period, or any other single, set period). That’s the paradigm shift referred to above. As one ages, the time period shortens.
Longevity in the tables is much like the bow wave in front of the boat. You don’t catch the bow wave until the boat stops! As long as one is alive, there are statistical years ahead in the life tables (at least for those that go beyond age 100). When you stop, future funding of future retirement years is a moot point. It then becomes a survivor or heir plan with remaining assets (which you shouldn’t outlive barring catastrophic spending) if annual computations adopt the rolling longevity approach as described in “Transition Through Old Age in a Dynamic Retirement Distribution Model.”
https://www.betterfinancialeducation.com/6th-paper-managing-retirement-income-very-old-ages
The fundamental takeaway is to ask yourself (as the researcher), or ask the researcher, “How would your conclusions change if you arranged your data such that longer periods come first, not last, and shorter withdrawal/drawdown periods come last, not first?” Why? Because a longevity, age-based paradigm recognizes that as one ages, time frames shorten, and what you should do looking forward, not backward, is what you’re trying to decide as you continue to age and review the health of your spending plan.
Just had a discussion with someone who didn’t quite get the nuance between how a SINGLE Monte Carlo simulation over a SET period (e.g., 30 years) is different than what I describe above.
The difference:
The cash flows and portfolio balances in today’s SINGLE SET method are ALL simulated WITHIN that single simulation over, in this example, 30 years. So, aging 10 years, and then another 10 years: how does that cash flow change between those ages? What is the range of possible choices over those years? (Single simulations have the ever-widening ranges of possibilities seen in their results.) The metaphor graphic for long division above shows simulation iterations for single-period simulations.
The cash flows and portfolio balances as described above are a CONNECTED SERIES OF 30 SEPARATE simulations, where each of the 30 simulations represents a different time period from the longevity tables,* COMBINED with selecting the optimum portfolio allocation for that time period too.
*As described above, there are more than 30 time periods possible IF you continue to live (or fewer than 30 if you don’t); this is the fear of long longevity that today is addressed, with difficulty, at the beginning of retirement. I argue that uncertainty can be resolved slowly over time AS ONE AGES. This is the boat’s bow wave metaphor above.
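A minimal, self-contained sketch of that connected series, with invented placeholder functions standing in for the life-table lookup and the data cloud: each attained age gets its own horizon and its own draw-down rate, and only one realized year links each simulation to the next.

```python
import numpy as np

rng = np.random.default_rng(1)

def longevity_end_age(age: int) -> int:
    """Invented placeholder; a real model reads a percentile end age from a life table."""
    return 94 + max(0, (age - 61) // 5)

def drawdown_rate(horizon: int) -> float:
    """Invented placeholder; a real model looks this up in the pre-simulated data cloud."""
    return 0.02 + 0.65 / horizon      # rises as the horizon shortens

balance = 1_000_000.0
for age in range(70, 75):             # a short excerpt of the full year-by-year chain
    horizon = max(1, longevity_end_age(age) - age)
    dd = drawdown_rate(horizon)       # this age's own simulation result
    cash_flow = balance * dd
    # One realized year passes; next year's simulation starts from the NEW balance.
    balance = (balance - cash_flow) * (1 + rng.normal(0.05, 0.12))
    print(age, horizon, f"{dd:.2%}", round(cash_flow))
```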
Two sets of uncertainty can be unified under the methodology described above: Probability of the Portfolio (portfolio statistics) and Probability of the Person (longevity statistics).
“The statistician George Box said that ‘All models are wrong, but some are useful.’ Dwight Eisenhower, Supreme Allied Commander in WWII, said that ‘Plans are nothing; planning is everything.’ I say that models are nothing; modeling is everything, because it will help you … figure out what is going on here.”
https://www.probabilitymanagement.org/blog/2020/12/01/the-axiomatic-fallacy-fallacy
Quote from Dr. Sam L. Savage, Executive Director of Probability Management.org, a 501(c)(3) nonprofit devoted to making uncertainty actionable. Dr. Savage is author of The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty (John Wiley & Sons, 2009, 2012). He is an Adjunct Professor in Civil and Environmental Engineering at Stanford University and a Fellow of Cambridge University’s Judge Business School. He is the inventor of the Stochastic Information Packet (SIP), an auditable data array for conveying uncertainty. Dr. Savage received his Ph.D. in computational complexity from Yale University.
Discussion Point: The cash flow WITHIN a single, fixed-time-period Monte Carlo calculation (the period derived from a “rule of thumb” approach to longevity) is NOT the same as the cash flow BETWEEN a connected series of Monte Carlo calculations with decreasing time periods that model aging (time periods that come directly from a STRATEGIC use of longevity period life tables).
If one arranges data output on graphs based on age, i.e., putting age on the x-axis as the time function, it changes the perception of how to arrange the data. That is, age (e.g., 67, 68, … 70 … 75 … etc.), instead of the number of years (e.g., 1, 2, 3 … 5 … 10 … 15 … etc.) or the calendar year itself (e.g., 2021, 2022 … etc.). An important, but subtle, shift in perception and paradigm.
It is well known that longer duration leads to lower drawdown rates. So the question is, when do those longer, versus shorter, durations apply?
In other words, do longer durations apply at the END of the data stream (and thus influence our graphical depiction, and thus interpretation, of the data as displayed)? This is how data and graphs are depicted presently, an extension of our human minds wanting to count the years up: “years since” or “years in” retirement, or “years of” or “duration of” retirement. All are examples of counting the years up.
Or do longer durations apply at the BEGINNING of the data stream (and thus influence our graphical depiction and thus interpretation, of the data as displayed)? When an age-based approach is applied by using life tables statistics to derive the time frame, this results in data being graphed and depicted with longer terms at the beginning of the graph or data tables, rather than longer terms at the end … as one ages into shorter terms over time. The ah-ha comes from seeing ages and data aligned together graphically and in data tables.
Age   To age   Derived drawdown rate (not rate of return)*
 61     94      4.13%
 62     94      4.20%
 63     94      4.28%
 64     94      4.35%
 65     94      4.43%
 66     95      4.43%
 67     95      4.51%
 68     95      4.60%
 69     95      4.71%
 70     95      4.82%
 71     95      4.94%
 72     96      4.94%
 73     96      5.07%
 74     96      5.22%
 75     96      5.38%
 76     97      5.38%
 77     97      5.56%
 78     98      5.56%
 79     98      5.75%
 80     98      5.97%
 81     98      6.22%
 82     98      6.51%
 83     99      6.51%
 84    100      6.51%
 85    100      6.84%
 86    100      7.19%
 87    101      7.19%
 88    101      7.60%
 89    101      8.07%
 90    102      8.07%
 91    103      8.07%
 92    103      8.62%
 93    104      8.62%
 94    104      9.27%
 95    105      9.27%
 96    106      9.27%
 97    106     10.03%
 98    107     10.03%
 99    107     10.99%
100    108     10.99%
101    108     12.19%
102    109     12.19%
103    110     12.19%
104    110     13.72%
105    111     13.72%
*Drawdown rates and age-based longevity are data sensitive. The above is for a specific 40% equity/60% bond portfolio using historic data from 1972 to 2020 and a same-age joint couple. It is for illustration purposes only, to show how the time between ages shortens with aging through a strategic use of period life table percentiles, and how the drawdown rate slowly increases through the aging process.
Published research papers for evidence of 3D and use of control variables:
“An Age-Based, Three-Dimensional Distribution Model Incorporating Sequence and Longevity Risks,” Journal of Financial Planning, March 2012, by Larry R. Frank Sr., John B. Mitchell, and David M. Blanchett.
“Probability-of-Failure-Based Decision Rules to Manage Sequence Risk in Retirement,” Journal of Financial Planning, November 2011, by Larry R. Frank Sr., John B. Mitchell, and David M. Blanchett.
“The Perfect Withdrawal Amount Over the Historical Record,” Financial Services Review, 2020, 28 (2): 96–132, by E. Dante Suarez. Figure 26 confirms 3D results using a completely different methodology.
Attended an interesting webinar yesterday. Indeed it is known that spending early influences balances later (legacy or bequest values). There’s an implicit choice people make if they choose to spend more than prudent (lower bequest), or spend less (higher bequest). One can’t see that implicit choice when “age to age” calculations are performed one by one, one year at a time. It is not seen because the cash flows and balances WITHIN a simulation are NOT THE SAME as cash flows and balances BETWEEN simulations. The latter is a model where deeper insights are made.
On the topic of legacy: there’s a mental trick played on our brains in early retirement, where the potential bequest appears large. It appears large because some of the money, call it “principal,” isn’t really bequest money; it’s lifestyle spending money for later years. As one ages, the principal needed decreases, because there are fewer years remaining that need lifestyle spending. The bequest amount is only really visualized when it is separated out, either by math (subtracting it out in the modeling) or by account, where that account is not included in the modeling. Both approaches lead to the same results; only in the latter can the bequest be clearly “seen” on statements and reports.
Modeling the interplay over time between lifestyle today for you, versus bequest for others makes this much clearer than single computations done year by year as one ages.