
How to Measure Anything: Finding the Value of "Intangibles" in Business

Availability: Ready to download

Praise for How to Measure Anything: Finding the Value of "Intangibles" in Business

"I love this book. Douglas Hubbard helps us create a path to know the answer to almost any question in business, in science, or in life . . . Hubbard helps us by showing us that when we seek metrics to solve problems, we are really trying to know something better than we know it now. How to Measure Anything provides just the tools most of us need to measure anything better, to gain that insight, to make progress, and to succeed." -Peter Tippett, PhD, M.D., Chief Technology Officer at CyberTrust and inventor of the first antivirus software

"Doug Hubbard has provided an easy-to-read, demystifying explanation of how managers can inform themselves to make less risky, more profitable business decisions. We encourage our clients to try his powerful, practical techniques." -Peter Schay, EVP and COO of The Advisory Council

"As a reader you soon realize that actually everything can be measured while learning how to measure only what matters. This book cuts through conventional clichés and business rhetoric and offers practical steps to using measurements as a tool for better decision making. Hubbard bridges the gaps to make college statistics relevant and valuable for business decisions." -Ray Gilbert, EVP, Lucent

"This book is remarkable in its range of measurement applications and its clarity of style. A must-read for every professional who has ever exclaimed, 'Sure, that concept is important, but can we measure it?'" -Dr. Jack Stenner, Cofounder and CEO of MetaMetrics, Inc.



30 reviews for How to Measure Anything: Finding the Value of "Intangibles" in Business

  1. 4 out of 5

    Takuro Ishikawa

    The most important thing I learned from this book: “A measurement is a set of observations that reduce uncertainty where the result is expressed as a quantity.” Finally! Someone has clearly explained that measurements are all approximations. Very often in social research, I have to spend a lot of time explaining that metrics don’t need to be exact to be useful and reliable. Hopefully, this book will help me shorten those conversations.

  2. 4 out of 5

    Yevgeniy Brikman

    As an engineer, this book makes me happy. A great discussion of how to break *any* problem down into quantifiable metrics, how to figure out which of those metrics are valuable, and how to measure them. The book is fairly actionable, there is a complementary website with lots of handy Excel tools, and there are plenty of examples to help you along. The only downside is that this is largely a stats book in disguise, so some parts are fairly dry and the difficulty level jumps around a little bit. If you make important decisions, especially in business, this book is for you.

    Some great quotes:

    Anything can be measured. If a thing can be observed in any way at all, it lends itself to some type of measurement method. No matter how "fuzzy" the measurement is, it's still a measurement if it tells you more than you knew before. And those very things most likely to be seen as immeasurable are, virtually always, solved by relatively simple measurement methods.

    Measurement: a quantitatively expressed reduction of uncertainty based on one or more observations. So a measurement doesn't have to eliminate uncertainty after all. A mere _reduction_ in uncertainty counts as a measurement and possibly can be worth much more than the cost of the measurement.

    A problem well stated is a problem half solved. —Charles Kettering (1876–1958)

    The clarification chain is just a short series of connections that should bring us from thinking of something as an intangible to thinking of it as a tangible. First, we recognize that if X is something that we care about, then X, by definition, must be detectable in some way. How could we care about things like "quality," "risk," "security," or "public image" if these things were totally undetectable, in any way, directly or indirectly? If we have reason to care about some unknown quantity, it is because we think it corresponds to desirable or undesirable results in some way. Second, if this thing is detectable, then it must be detectable in some amount. If you can observe a thing at all, you can observe more of it or less of it. Once we accept that much, the final step is perhaps the easiest. If we can observe it in some amount, then it must be measurable.

    Rule of Five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.

    An important lesson comes from the origin of the word experiment. "Experiment" comes from the Latin ex-, meaning "of/from," and periri, meaning "try/attempt." It means, in other words, to get something by trying. The statistician David Moore, the 1998 president of the American Statistical Association, goes so far as to say: "If you don't know what to measure, measure anyway. You'll learn what to measure."

    Four useful measurement assumptions: 1. Your problem is not as unique as you think. 2. You have more data than you think. 3. You need less data than you think. 4. An adequate amount of new data is more accessible than you think.

    Don't assume that the only way to reduce your uncertainty is to use an impractically sophisticated method. Are you trying to get published in a peer-reviewed journal, or are you just trying to reduce your uncertainty about a real-life business decision? Think of measurement as iterative. Start measuring it. You can always adjust the method based on initial findings.

    In business cases, most of the variables have an "information value" at or near zero. But usually at least some variables have an information value that is so high that some deliberate measurement is easily justified. While there are certainly variables that do not justify measurement, a persistent misconception is that unless a measurement meets an arbitrary standard (e.g., adequate for publication in an academic journal or meets generally accepted accounting standards), it has no value. This is a slight oversimplification, but what really makes a measurement of high value is a lot of uncertainty combined with a high cost of being wrong. Whether it meets some other standard is irrelevant.

    When people say "You can prove anything with statistics," they probably don't really mean "statistics," they just mean broadly the use of numbers (especially, for some reason, percentages). And they really don't mean "anything" or "prove." What they really mean is that "numbers can be used to confuse people, especially the gullible ones lacking basic skills with numbers." With this I completely agree, but it is an entirely different claim.

    The fact is that the preference for ignorance over even marginal reductions in ignorance is never the moral high ground. If decisions are made under a self-imposed state of higher uncertainty, policy makers (or even businesses like, say, airplane manufacturers) are betting on our lives with a higher chance of erroneous allocation of limited resources. In measurement, as in many other human endeavors, ignorance is not only wasteful but can be dangerous.

    If we can't identify a decision that could be affected by a proposed measurement and how it could change those decisions, then the measurement simply has no value. The lack of an exact number is not the same as knowing nothing.

    The McNamara Fallacy: The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't easily be measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily isn't important. This is blindness. The fourth step is to say that what can't easily be measured really doesn't exist. This is suicide.

    First, we know that the early part of any measurement usually is the high-value part. Don't attempt a massive study to measure something if you have a lot of uncertainty about it now. Measure a little bit, remove some uncertainty, and evaluate what you have learned. Were you surprised? Is further measurement still necessary? Did what you learned in the beginning of the measurement give you some ideas about how to change the method? Iterative measurement gives you the most flexibility and the best bang for the buck.

    This point might be disconcerting to some who would like more certainty in their world, but everything we know from "experience" is just a sample. We didn't actually experience everything; we experienced some things and we extrapolated from there. That is all we get—fleeting glimpses of a mostly unobserved world from which we draw conclusions about all the stuff we didn't see. Yet people seem to feel confident in the conclusions they draw from limited samples. The reason they feel this way is because experience tells them sampling often works. (Of course, that experience, too, is based on a sample.)

    Anything you need to quantify can be measured in some way that is superior to not measuring it at all. —Gilb's Law
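A quick aside on the Rule of Five quoted above: it follows from the fact that each independent random draw has a 50% chance of landing above the population median, so the median escapes the sample range only when all five draws land on the same side. A minimal Python check; the lognormal population below is an arbitrary stand-in:

```python
import random

# Analytic form: "all five above" and "all five below" each have
# probability 0.5**5, so the median falls inside the sample range
# with probability 1 - 2 * 0.5**5 = 0.9375.
print(1 - 2 * 0.5 ** 5)  # 0.9375

# Empirical check against an arbitrary skewed population.
population = [random.lognormvariate(0, 1) for _ in range(10_000)]
median = sorted(population)[len(population) // 2]

trials = 10_000
hits = sum(
    min(s) <= median <= max(s)
    for s in (random.sample(population, 5) for _ in range(trials))
)
print(hits / trials)  # ~0.9375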

  3. 4 out of 5

    Jurgen Appelo

    297 references to risk, and only 29 references to opportunity. No mention of unknown unknowns (or black swans), and no mention of the observer effect (Goodhart's law). A great book, teaching you all about metrics, as long as you ignore complexity.

  4. 4 out of 5

    Nils

    An OK popularization of measurement techniques. But it downplays the key issue, which is data quality challenges, of which there are at least two types. The first is the "moneyball" type: a phenomenon where we know intuitively that there are important differences in measurable outcomes but we lack statistically significant explanations. The challenge here is to find things to measure that are consistently revealing of the phenomenon you are ultimately interested in measuring (say, team wins). Making it harder is that sometimes you need to build a supercollider in order to measure the phenomenon in question, and for many reasons that may not always be feasible. Data collection is expensive, in many ways, not least socially: new forms of measurement of social activities (including business activities) threaten those who benefit from the status quo. The second data quality challenge is more insidious, the "deviant globalization" type: we have the data, or some data, but it is hopelessly and often intentionally corrupted or compromised, since there are actors who have an active interest in obscuring measurement. This is true about almost all information related to morally questionable activities, for example, from sex to drugs to theft. But it's not just there: any sales manager trying to accurately gauge the size of his reps' pipeline is intimate with the problem of trying to extract accurate data. In sum, the book is fine on the technique side, but naive about what we may call the social epistemologies.

  5. 5 out of 5

    Martin Klubeck

    I really like this book. Hubbard not only champions the belief that anything can be measured, he gives you the means (the understanding of how) to get it done. I have used his book on numerous occasions when tackling some difficult data collection efforts. Hubbard's taxonomy and mine don't fully jibe, but that's a minor point; I found much more to like than not. I like to highlight and make notes in good books... this book is full of both. I especially like one of his "useful measurement assumptions." I think it sums up the book nicely: "There is a useful measurement that is much simpler than you think." This book helps you find the simple answer to the daunting problem of "how to measure" something. Another section I like a lot is how to "calibrate estimates" - basically it gives really useful, hands-on techniques for getting better at guessing. This is a great tool, not only for measuring, but for any role that requires good estimating. Nothing is perfect, and Hubbard has at least one chapter where I think he failed to simplify life - his chapter on measuring risk was too complicated (unless you are a statistician). Bottom line? Great book - especially for those tasked with collecting the data necessary to measure stuff!

  6. 5 out of 5

    Marcelo Bahia

    An excellent read. It could be summed up as a "basic statistics for business" book, although it definitely goes beyond that in many aspects. As the title suggests, throughout the whole book the author strongly defends the case that everything can be measured, even though the method may not be obvious at first glance. The book's structure basically consists of explanations of why this is so, plus various examples and methods that should help the reader deal with many types of such problems. Along the way, the writing is very clear and the reading is more pleasant than you would expect from a "statistics book". This is because much of the value-added of the book comes not from the quantitative side (which is actually quite basic statistics, something I see as positive in the context of the book), but from the qualitative analysis and differentiated viewpoint of the author under various circumstances. Actually, he seems knowledgeable and is pretty insightful most of the time, and I expect that the usefulness of each of these insights will depend on your current career and experience. Having worked as a financial analyst in the Brazilian financial markets for the past 8 years, for me the 2 most interesting insights were: 1) His definition of measurement as any number or figure that reduces risk compared to your previous state. I consider this REALLY important in the workplace, as most people consider valid measurements only those which can be precisely quantified, preferring ignorance over possible risk-reducing wide-range estimates in all other situations. 2) Due to the above misconception of the definition of measurement, people neglect measurements and estimates exactly in the situations in which they are most useful. When you don't know anything, any imprecise estimate will reduce risk and add value! Looking back, this non-obvious insight is precisely what we needed when facing some specific analytical and decision-making problems in my firm. Overall, this is one of the most interesting books I've read in the past few months, and it should be a great investment of time & money for any professional who even mildly deals with quantitative problems at work.

  7. 4 out of 5

    Alok Kejriwal

    How to Measure Anything - Book Review. A mentally challenging yet incredibly enlightening book. What's impressive about the content?
    - The art and science of making guesses.
    - The ability to use well-thought-through assumptions and estimate outcomes.
    - Early examples in the book of legends such as Fermi, who asked his students to estimate the number of piano tuners in Chicago (more like the questions you supposedly get asked in a Google interview?)
    - Bayes' Theorem and Bayesian thinking. It's NERDY but essential.
    - Profound, amazing examples of how you DON'T have to have too much data to analyse things.
    - How to INVENT metrics. How the Cleveland Orchestra started counting 'standing ovations' to measure the success of its new conductor.
    - The importance of the Confidence Interval (CI).
    - MONTE CARLO simulations!
    - How Amazon introduced free wrapping to figure out how many books were gifts!
    - Q's like: How would you measure the number of fish in a lake?
    This is a MATH-heavy book that takes a LONG time to read. If you don't like numbers & formulas (the book is FULL of them), I suggest you still buy the book and take what you can from it.
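The piano-tuner question this review mentions is the classic Fermi decomposition: multiply a chain of rough, individually defensible guesses. A minimal sketch; every input below is an illustrative assumption, not a figure from the book:

```python
# Fermi-style estimate of "How many piano tuners are in Chicago?"
# All inputs are illustrative guesses.
population = 9_000_000          # people in the Chicago metro area
people_per_household = 2.5
piano_ownership_rate = 0.05     # 1 in 20 households has a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

pianos = population / people_per_household * piano_ownership_rate
tunings_needed = pianos * tunings_per_piano_per_year
tuner_capacity = tunings_per_tuner_per_day * working_days_per_year

print(round(tunings_needed / tuner_capacity))  # ~180 tuners
```

The point is not the final number but that each factor is far easier to bound than the original question.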

  8. 4 out of 5

    Steve Walker

    There is a lot of good information here, but it is more of a textbook and very dry. I read this book because I have to make decisions every day. Some decisions are very easy because I have the intel and facts that make the decision for me. But other decisions aren't so easy. What are my "real" risks? How do I separate emotion from a decision? What about all the things involved that can't be measured? Ah, that is where this book was insightful and helpful. Hubbard asserts that there isn't anything that can't be measured. Metrics. That is the key to making better decisions. The group I manage has a lot of dynamic and organic tasks to perform each day. I have never been able to quantify a lot of the work we do. That is because I am entrenched in scientific measurements such as average time to handle a customer call. That measurement is meaningless for me. Each call is a different subject. I cannot measure their performance based on how quickly they resolve a call because some problems are simple and others are complex and require enlisting other personnel. But Hubbard teaches many techniques and alternate ways to look at things to get some way of quantifying; perhaps not precisely, but enough to help navigate the myriad pieces of information that can go into a business decision. You have to "want" to read this book. But if you "want" to improve ROI, if you "want" to provide better risk analysis, if you "want" to be more confident about providing management with your recommendations... then you'll "want" to read this book.

  9. 5 out of 5

    Bibhu Ashish

    Happened to read the book via the IIBA.org site, where I have been a member since last year. The best takeaway from the book is the structured thought process it brings to dealing with intangibles, which we are always demotivated to measure. To summarize my learning, I would just mention the below, which I have copied from the book:
    1. If it's really that important, it's something you can define. If it's something you think exists at all, it's something you've already observed somehow.
    2. If it's something important and something uncertain, you have a cost of being wrong and a chance of being wrong.
    3. You can quantify your current uncertainty with calibrated estimates.
    4. You can compute the value of additional information by knowing the "threshold" of the measurement where it begins to make a difference compared to your existing uncertainty.
    5. Once you know what it's worth to measure something, you can put the measurement effort in context and decide on the effort it should take.
    6. Knowing just a few methods for random sampling, controlled experiments, or even merely improving on the judgments of experts can lead to a significant reduction in uncertainty.
    One caution, though: people who are not that fond of mathematics and data may find it a bit too much, but this book is worth reading at least once.
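Points 2 and 4 in the list above are Hubbard's expected value of information idea: the most any measurement can be worth is the expected cost of the mistakes it could prevent. A minimal sketch with made-up numbers; the 40% and $500,000 are illustrative assumptions:

```python
# Expected value of perfect information (EVPI) for a yes/no decision:
# the chance you would choose wrongly today times what that mistake costs.
chance_of_being_wrong = 0.4    # illustrative: your current uncertainty
cost_of_being_wrong = 500_000  # illustrative: loss if the choice is wrong

evpi = chance_of_being_wrong * cost_of_being_wrong
print(f"A perfect measurement is worth at most ${evpi:,.0f}")
# Spend meaningfully less than this on the measurement itself; variables
# with an EVPI near zero are not worth deliberate measurement at all.
```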

  10. 4 out of 5

    Nathan

    This is a dense book. It took me several months to get through it, but that was partially because after the refresher on Bayesian statistics I started reading another textbook on that. If you like math and numbers and analysis and have to make decisions, you'll get some useful information from this book. I built my first Monte Carlo model while walking through this. For years I've been asking friends "How confident are you?" when they give me a binary answer. E.g.: Q: Will this be done by Friday? A: Yes. Q: How confident are you? A: 50%. After reading this I've taken away the idea of always asking people for a 90% confidence interval. I think one of the most useful (and fun) parts of this book is the calibration exercise. If you're asked 10 questions and told to provide a 90% confidence interval of where the true answer is, then you should get 9 out of the 10 correct. I didn't on my first try, and most people are terrible at it. But apply money to the mix, and people instantly improve. This tip was immediately used in the next model I built :)

    Here are my notes:

    Although this may seem a paradox, all exact science is based on the idea of approximation. If a man tells you he knows a thing exactly, then you can be safe in inferring that you are speaking to an inexact man. —Bertrand Russell (1872–1970), British mathematician and philosopher

    Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations. A mere reduction, not necessarily elimination, of uncertainty will suffice for a measurement. Not only does a true measurement not need to be infinitely precise to be considered a measurement, but the lack of reported error—implying the number is exact—can be an indication that empirical methods, such as sampling and experiments, were not used (i.e., it's not really a measurement at all). The key lesson is that a measurement tells you more than you knew before about something that matters.

    A problem well stated is a problem half solved. —Charles Kettering (1876–1958), American inventor, holder of 300 patents, including electrical ignition for automobiles

    There is no greater impediment to the advancement of knowledge than the ambiguity of words. —Thomas Reid (1710–1796), Scottish philosopher

    If someone asks how to measure "strategic alignment" or "flexibility" or "customer satisfaction," I simply ask: "What do you mean, exactly?" It is interesting how often people further refine their use of the term in a way that almost answers the measurement question by itself.

    Rule of Five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.

    The only valid reason to say that a measurement shouldn't be made is that the cost of the measurement exceeds its benefits.

    Usually, only a few things matter—but they usually matter a lot. In most business cases, most of the variables have an "information value" at or near zero. But usually at least some variables have an information value that is so high that some deliberate measurement effort is easily justified. What makes a measurement of high value is a lot of uncertainty combined with a high cost of being wrong.

    Ignorance is never better than knowledge. —Enrico Fermi, winner of the 1938 Nobel Prize for Physics

    Four Useful Measurement Assumptions: It's been measured before. You have far more data than you think. You need far less data than you think. Useful, new observations are more accessible than you think.

    The first few observations are usually the highest payback in uncertainty reduction for a given amount of effort. In fact, it is a common misconception that the higher your uncertainty, the more data you need to significantly reduce it. Again, when you know next to nothing, you don't need much additional data to tell you something you didn't know before.

    A decision has two or more realistic alternatives. Merely decomposing highly uncertain estimates provides a huge improvement to estimates. As the great statistician George Box put it, "Essentially, all models are wrong, but some are useful."

    The subjective estimates of some persons are demonstrably—measurably—better than those of others. The ability of a person to assess odds can be calibrated—just like any scientific instrument is calibrated to ensure it gives proper readings. Assessing uncertainty is a general skill that can be taught with a measurable improvement. We are simply not wired to doubt our own proclamations once we make them.

    I also asked experts who are providing range estimates to look at each bound on the range as a separate "binary" question. A 90% CI means there is a 5% chance the true value could be greater than the upper bound and a 5% chance it could be less than the lower bound. This means that estimators must be 95% sure that the true value is less than the upper bound. If they are not that certain, they should increase the upper bound until they are 95% certain.

    I sometimes call this the "absurdity test." It reframes the question from "What do I think this value could be?" to "What values do I know to be ridiculous?" We look for answers that are obviously absurd and then eliminate them until we get to answers that are still unlikely but not entirely implausible. This is the edge of our knowledge about that quantity.

    Assumptions about quantities are necessary if you have to use deterministic accounting methods with exact points as values. You could never know an exact point with certainty, so any such value must be an assumption. But if you are allowed to model your uncertainty with ranges and probabilities, you do not have to state something you don't know for a fact. If you are uncertain, your ranges and assigned probabilities should reflect that. If you have "no idea" that a narrow range is correct, you simply widen it until it reflects what you do know—with 90% confidence. When it comes to assessing your own uncertainty, you are the world's leading expert. Once calibrated, you are a changed person. You have a keen sense of your level of uncertainty.

    It is better to be approximately right than to be precisely wrong. —Warren Buffett

    It is the mark of an educated mind to rest satisfied with the degree of precision which the nature of the subject admits and not to seek exactness where only an approximation is possible. —Aristotle

    For most problems in statistics and measurement, we are asking, "What is the chance the truth is X, given what I've seen?" Again, it's actually often easier to answer the question, "If the truth was X, what was the chance of seeing what I did?" Bayesian inversion allows us to answer the first question by answering the second, easier question.

    When we examine our own behaviors closely, it's easy to see that only a hypocrite says "Life is priceless." Any fair researcher should always be able to say that sufficient empirical evidence would change their mind.

    If it's really that important, it's something you can define. If it's something you think exists at all, it's something you've already observed somehow. If it's something important and something uncertain, you have a cost of being wrong and a chance of being wrong.
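The calibration exercise this reviewer describes is easy to score yourself on. A minimal sketch; the question set below is a stand-in for the ten trivia questions a real test would use:

```python
# Score a 90%-confidence-interval calibration test.
# Each entry is (lower bound, upper bound, true value); these are stand-ins.
answers = [
    (1750, 1850, 1769),  # e.g. "In what year was Napoleon born?"
    (5000, 9000, 8849),  # e.g. "How tall is Mount Everest, in meters?"
    (30, 70, 46),        # e.g. "How many chromosomes do humans have?"
    # ...seven more (low, high, truth) triples in a real test
]

hits = sum(low <= truth <= high for low, high, truth in answers)
print(f"{hits}/{len(answers)} intervals contained the truth")
# With genuine 90% intervals you should capture ~9 of 10. Far fewer means
# overconfidence: widen each bound until you would take a 95% bet on it.
```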

  11. 5 out of 5

    Jon

    Simply put, the first half of this is just awesome. As I listened to this via audio, the second half is plagued by many formulas that don't translate well or aren't easily understood when listened to. The second half is also very heavy on statistics, which could be a somewhat laborious read for some. The first half is very recommended, as it goes into what it means to "measure" something and suggests some very fundamental questions regarding measuring. E.g.: What is it you want to have measured? E.g., what does security mean for you? Why is this important for you? How much is this measurement worth to you? What do you know about the problem now? Hubbard gives tools for solving problems, e.g., the Fermi method and the Bayesian toolbox, which allow a rough estimation of practically anything. Hubbard also gives some very good pointers as to how you calibrate yourself to counteract psychological biases. If you read it, make sure you dedicate a good amount of time to the first half as, imo, this is where most of the loot is located.

  12. 5 out of 5

    Rick Howard

    Douglas Hubbard's "How to Measure Anything: Finding the Value of 'Intangibles'" is an excellent candidate for the Cybersecurity Canon Hall of Fame. He describes how it is possible to collect data to support risk decisions for even the hardest kinds of questions. He says that network defenders do not have to have 100% accuracy in our models to help support these risk decisions. We can strive to simply reduce our uncertainty about ranges of possibilities. He writes that this particular view of probability is called Bayesian, and that it was out of favor within the statistical community until just recently, when it became obvious that it worked for a certain set of really hard problems. He describes a few simple math tricks that all network defenders can use to make predictions about risk decisions for our organizations. He even demonstrates how easy it is for network defenders to run our own Monte Carlo simulations using nothing more than a spreadsheet. Because of all of that, "How to Measure Anything: Finding the Value of 'Intangibles'" is indeed a Cybersecurity Canon Hall of Fame candidate and you should have read it by now.

    Introduction

    The Cybersecurity Canon project is a "curated list of must-read books for all cybersecurity practitioners – be they from industry, government or academia — where the content is timeless, genuinely represents an aspect of the community that is true and precise, reflects the highest quality and, if not read, will leave a hole in the cybersecurity professional's education that will make the practitioner incomplete." [1] This year, the Canon review committee inducted this book into the Canon Hall of Fame: "How to Measure Anything in Cybersecurity Risk," by Douglas W. Hubbard and Richard Seiersen. [2][3] According to the Canon committee member reviewer, Steve Winterfeld, "'How to Measure Anything in Cybersecurity Risk' is an extension of Hubbard's successful first book, 'How to Measure Anything: Finding the Value of "Intangibles" in Business.' It lays out why statistical models beat expertise every time. It is a book anyone who is responsible for measuring risk, developing metrics, or determining return on investment should read. It provides a strong foundation in qualitative analytics with practical application guidance." [4]

    I personally believe that precision risk assessment is a key and currently missing element in the CISO's bag of tricks. As a community, network defenders in general are not good at transforming technical risk into business risk for the senior leadership team. For my entire career, I have gotten away with listing the 100+ security weaknesses within my purview and giving them red, yellow, or green labels to mean bad, kind-of-bad, or not bad.
    If any of my bosses had bothered to ask me why I gave one weakness a red label vs. a green label, I would have said something like: "25 years of experience, blah, blah, blah, trust me, blah, blah, blah, can I have the money please?" I believe the network defender's inability to translate technical risk into business risk with any precision is the reason that the CISO is not considered at the same level as other senior C-suite executives like the CEO, the CFO, the CTO, and the CMO. Most of those leaders have no idea what the CISO is talking about. For years, network defenders have blamed these senior leaders for not being smart enough to understand the significance of the security weaknesses we bring to them. But I assert that it is the other way around. The network defenders have not been smart enough to convey the technical risks to business leaders in a way they might understand. This CISO inability is the reason that the Canon Committee inducted "How to Measure Anything in Cybersecurity Risk," and another precision risk book called "Measuring and Managing Information Risk: A FAIR Approach," into the Canon Hall of Fame. [5][4][3][6][7] These books are the places to start if you want to educate yourself on this new way of thinking about risk to the business.

    For me, though, this is not an easy subject. I slogged my way through both of these books because basic statistical models completely baffle me. I took stats courses in college and grad school but sneaked through them by the skin of my teeth. All I remember about stats was that it was hard. When I read these two books, I think I only understood about three-quarters of what I was reading, not because they were written badly but because I struggled with the material. I decided to get back to basics and read Hubbard's original book that Winterfeld referenced in his review, "How to Measure Anything: Finding the Value of 'Intangibles' in Business," to see if it was also Canon worthy.

    The Network Defender's Misunderstanding of Metrics, Risk Reduction, and Probabilities

    Throughout the book, Hubbard emphasizes that seemingly dense and complicated risk questions are not as hard to measure as you might think. He reasons from scholars like Edward Lee Thorndike and Paul Meehl from the early twentieth century about Clarification Chains: If it matters at all, it is detectable/observable. If it is detectable, it can be detected as an amount (or range of possible amounts). If it can be detected as a range of possible amounts, it can be measured. [8]

    As a network defender, whenever I think about capturing metrics that will inform how well my security program is doing, my head begins to hurt. Oh, there are many things that we could collect – like outside IP addresses hitting my infrastructure, security control logs, employee network behavior, time to detect malicious behavior, time to eradicate malicious behavior, how many people must react to new detections, etc. – but it is difficult to see how that collection of potential badness demonstrates that I am reducing material risk to my business with any precision. Most network defenders in the past, including me, have simply thrown our hands up in surrender. We seem to say to ourselves that if we can't know something with 100% accuracy, or if there are countless intangible variables with many veracity problems, then it is impossible to make any kind of accurate prediction about the success or failure of our programs. Hubbard makes the point that we are not looking for 100% accuracy.
    What we are really looking for is a reduction in uncertainty. He says that the concept of measurement is not the elimination of uncertainty but the abatement of it. If we can collect a metric that helps us reduce that uncertainty, even if it is just by a little bit, then we have improved our situation from not knowing anything to knowing something. He says that you can learn something from measuring with very small random samples of a very large population. You can measure the size of a mostly unseen population. You can measure even when you have many, sometimes unknown, variables. You can measure the risk of rare events. Finally, Hubbard says that you can measure the value of subjective preferences like art or free time or life in general.

    According to Hubbard, "We quantify this initial uncertainty and the change in uncertainty from observations by using probabilities." [8] These probabilities refer to our uncertainty state about a specific question. The math trick that we all need to understand is allowing for a range of possibilities that we are 90% sure the true value lies between. For example, we may be trying to reduce the number of humans who have to respond to a cyberattack. In this fictitious example, last year the Incident Response Team handled 100 incidents with three people each: a total of 300 people. We think that installing a next-generation firewall will reduce that number. We don't know exactly how much, but some. We start here to bracket the question. Do we think that installing the firewall will eliminate the need for all humans to respond? Absolutely not. What about reducing the number to three incidents with three people, for a total of nine? Maybe. What about reducing the number to 10 incidents with three people, for a total of 30? That might be possible. That is our lower limit. Let's go to the high side. Do you think that installing the firewall will have zero impact in reducing the number? No. What about 90 attacks with three people, for a total of 270? Maybe. What about 85 attacks with three people, for a total of 255? That seems reasonable. That is our upper limit. By doing this bracketing we can say that we are 90% sure that installing the next-generation firewall will reduce the number of humans who have to respond to cyber incidents from 300 to between 30 and 255. Astute network defenders will point out that this range is pretty wide. How is that helpful? Hubbard says that, first, you now know this, whereas before you didn't know anything. Second, this is the start. You can now collect other metrics that might help you reduce the gap.

    The History of Scientific Measurement Evolution

    This particular view of probabilities, the idea that there is a range of outcomes that you can be 90% sure about, is the Bayesian interpretation of probabilities. Interestingly, this view of statistics has not been in favor since its inception, when Thomas Bayes penned the original formula back in the 1740s. The naysayers were the Frequentists. Their theory said that the probability of an event can only be determined by how many times it has happened in the past. To them, modern science requires both objectivity and precise answers. According to Hubbard, "The term 'statistics' was introduced by the philosopher, economist, and legal expert Gottfried Achenwall in 1749.
    He derived the word from the Latin statisticum, meaning 'pertaining to the state.' Statistics was literally the quantitative study of the state." [8] In the Frequentist view, the Bayesian philosophy requires a measure of "belief and approximations. It is subjectivity run amok, ignorance coined into science." [7] But the real world has problems where the data is scant. Leaders worry about potential events that have never happened before. Bayesians were able to provide real answers to these kinds of problems, like the defeat of the Enigma encryption machine in World War II and the finding of a lost, sunken nuclear submarine that was the basis for the movie "The Hunt for Red October." But it wasn't until the early 1990s that the theory became commonly accepted. [7]

    Hubbard walks the reader through the historical research behind the current state of scientific measurement. He explains how Paul Meehl, in the mid-1900s, demonstrated time and again that statistical models outperformed human experts. He describes the birth of information theory with Claude Shannon in the late 1940s, and credits Stanley Smith Stevens, around the same time, with crystallizing different scales of measurement, from sets to ordinals to ratios and intervals. He reports how Amos Tversky and Daniel Kahneman, through their research in the 1960s and 1970s, demonstrated that we can improve our measurements around subjective probabilities. In the end, Hubbard defines measurement this way:

    Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations. [8]

    Simple Math Tricks

    Hubbard explains two math tricks that, after reading, seem impossible to be true, but when used by Bayesian proponents greatly simplify measurement-taking for difficult problems.

    The Power of Small Samples, the Rule of Five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population. [8]

    The Single Sample Majority Rule (i.e., the Urn of Mystery Rule): Given maximum uncertainty about a population proportion—such that you believe the proportion could be anything between 0% and 100% with all values being equally likely—there is a 75% chance that a single randomly selected sample is from the majority of the population. [8]

    I admit that the math behind these rules escapes me. But I don't have to understand the math to use the tools. It reminds me of a moving scene from one of my favorite movies, "Lincoln." President Lincoln, played brilliantly by Daniel Day-Lewis, discusses his reasoning for keeping the southern agents, who want to discuss peace before the 13th Amendment is passed, away from Washington: "Euclid's first common notion is this. Things that are equal to the same thing are equal to each other. That's a rule of mathematical reasoning. It's true because it works. Has done and always will do." [9]

    The bottom line is that statistically significant does not mean a large number of samples. Hubbard says that statistical significance has a precise mathematical meaning that most lay people do not understand and many scientists get wrong most of the time. For the purposes of risk reduction, stick to the idea of a 90% confidence interval regarding potential outcomes. The Power of Small Samples and the Single Sample Majority Rule are rules of mathematical reasoning that all network defenders should keep handy in their utility belts as they measure risk in their organizations.
    Simple Measurement Best Practices and Definitions

    As I said before, most network defenders think that measuring risk in terms of cybersecurity is too hard. Hubbard explains four rules of thumb that every practitioner should consider before they give up: It's been measured before. You have far more data than you think. You need far less data than you think. Useful, new observations are more accessible than you think. [8]

    He then defines "uncertainty" and "risk" through a possibility and probability lens:

    Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility.
    Measurement of Uncertainty: A set of probabilities assigned to a set of possibilities.
    Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
    Measurement of Risk: A set of possibilities each with quantified probabilities and quantified losses. [8]

    In the network defender world, we tend to define risk in terms of threats, vulnerabilities, and consequences. [10] Hubbard's relatively new take gives us a much more precise way to think about these terms.

    Monte Carlo Simulations

    According to Hubbard, the invention of the computer made it possible for scientists to run thousands of experimental trials based on probabilities for inputs. These trials are called Monte Carlo simulations. In the 1930s, Enrico Fermi used the method to calculate neutron diffusion by hand, with human mathematicians calculating the probabilities. In the 1940s, Stanislaw Ulam, John von Neumann, and Nicholas Metropolis realized that the computer could automate the Monte Carlo method and help them design the atomic and hydrogen bombs. Today, everybody with access to a spreadsheet can run their own Monte Carlo simulations.

    For example, take my previous example of trying to reduce the number of humans who have to respond to a cyberattack. We said that during the previous year, 300 people responded to cyberattacks. We said that we were 90% certain that the installation of a next-generation firewall would reduce the number of humans who have to respond to incidents to between 30 and 255. We can refine that number even more by simulating hundreds or even thousands of scenarios inside a spreadsheet. I did this myself by setting up 100 scenarios where I randomly picked a number between 0 and 300. I calculated the mean to be 131 and the standard deviation to be 64. Remember that the standard deviation is nothing more than a measure of spread from the mean. [11][12][13] The rule of 68-95-99.7 says that 68% of the recorded values will fall within the first standard deviation, 95% will fall within the second standard deviation, and 99.7% will fall within the third standard deviation. [8] With our original estimate, we said there was a 90% chance that the number is between 30 and 255. After running the Monte Carlo simulation, we can say that there is a 68% chance that the number is between 76 and 248. How about that? Even a statistical luddite can run his own Monte Carlo simulation.

    Conclusion

    After reading Hubbard's second book in the series, "How to Measure Anything in Cybersecurity Risk," I decided to go back to the original to see if I could understand with a bit more clarity exactly how the statistical models worked, and to determine if the original was Canon worthy too. I learned that there was probably a way to collect data to support risk decisions for even the hardest kinds of questions.
    I learned that network defenders do not have to have 100% accuracy in our models to help support these risk decisions. We can strive to simply reduce our uncertainty about ranges of possibilities. I learned that this particular view of probability is called Bayesian, and that it was out of favor within the statistical community until just recently, when it became obvious that it worked for a certain set of really hard problems. I learned that there are a few simple math tricks that we can all use to make predictions about these really hard problems that will help us make risk decisions for our organizations. And I even learned how to build my own Monte Carlo simulations to support those efforts. Because of all of that, "How to Measure Anything: Finding the Value of 'Intangibles'" is indeed Canon worthy and you should have read it by now.

    Sources
    [1] "Cybersecurity Canon: Essential Reading for the Security Professional," by Palo Alto Networks, last viewed 5 July 2017, https://www.paloaltonetworks.com/thre...
    [2] "Cybersecurity Canon: 2017 Award Winners," by Palo Alto Networks, last visited 5 July 2017, https://cybercanon.paloaltonetworks.c...
    [3] "'How To Measure Anything in Cybersecurity Risk' - Cybersecurity Canon 2017," video interview by Palo Alto Networks, interviewer: Canon Committee member Bob Clark, interviewees: Douglas W. Hubbard and Richard Seiersen, 7 June 2017, last visited 5 July 2017, https://www.youtube.com/watch?v=2o_mA...
    [4] "The Cybersecurity Canon: How to Measure Anything in Cybersecurity Risk," book review by Canon Committee member Steve Winterfeld, 2 December 2016, last visited 5 July 2017, https://cybercanon.paloaltonetworks.com/
    [5] "How to Measure Anything in Cybersecurity Risk," by Douglas W. Hubbard and Richard Seiersen, published by Wiley, 25 April 2016, last visited 5 July 2017, https://www.goodreads.com/book/show/2...
    [6] "The Cybersecurity Canon: Measuring and Managing Information Risk: A FAIR Approach," book review by Canon Committee member Ben Rothke, 10 September 2015, last visited 5 July 2017, https://researchcenter.paloaltonetwor...
    [7] "Sharon Bertsch McGrayne: 'The Theory That Would Not Die' | Talks at Google," by Sharon Bertsch McGrayne, Google, 23 August 2011, last visited 7 July 2017, https://www.youtube.com/watch?v=8oD6e...
    [8] "How to Measure Anything: Finding the Value of 'Intangibles' in Business," by Douglas W. Hubbard, published by John Wiley & Sons, 2007, last visited 10 July 2017, https://www.goodreads.com/book/show/4...
    [9] "Lincoln talks about Euclid," by Alexandre Borovik, The De Morgan Forum, 20 December 2012, last visited 10 July 2017, http://education.lms.ac.uk/2012/12/li...
    [10] "BitSight Security Ratings Blog," by Melissa Stevens, 10 January 2017, last visited 10 July 2017, https://www.bitsighttech.com/blog/cyb...
    [11] "Standard Deviation - Explained and Visualized," by Jeremy Jones, YouTube, 5 April 2015, last visited 9 July 2017, https://www.youtube.c
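The spreadsheet experiment described in the review above takes only a few lines outside a spreadsheet as well. A minimal sketch of the setup as he describes it (100 uniform draws between 0 and 300); exact results vary from run to run, so the reviewer's 131 and 64 won't reproduce exactly:

```python
import random
import statistics

# The reviewer's experiment: 100 scenarios, each a random guess, drawn
# uniformly between 0 and 300, at how many responders are needed after
# the new firewall is installed.
scenarios = [random.uniform(0, 300) for _ in range(100)]

mean = statistics.mean(scenarios)
sd = statistics.stdev(scenarios)
print(f"mean ~ {mean:.0f}, standard deviation ~ {sd:.0f}")

# By the 68-95-99.7 rule, about 68% of values fall within one standard
# deviation of the mean (strictly a normal-distribution rule; the
# reviewer applies it to these draws as a rough summary).
print(f"68% band: {mean - sd:.0f} to {mean + sd:.0f}")
```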

  13. 4 out of 5

    Paulo Saraiva

    To put it simply: the best book I ever read about risk management. If you want great and practical insights about what you need to measure when it comes to problem solving or decision making, this is a masterpiece. Here you will find a lot of mathematical tools that are extremely useful in clarifying situations in which we tend to think there is no way to perform objective measurement, specifically about what we usually call "intangibles". Even when it comes to the psychology of decision making, Hubbard proposes pragmatic ways to translate some of the most acknowledged theories and models into mathematical language. The chapters covering the Lens and Rasch models are particularly remarkable. A modern and combative stance against the subjectivity that permeates most risk management tools widely used in organizations.

  14. 4 out of 5

    Kevin

    Tedious to read, unless you want a statistics course. I was looking for the theory, not the equations. I don't think the entirety of the book was worth the few nuggets I pulled out. The cliff notes amount to: measurement is about uncertainty reduction, not necessarily uncertainty elimination. Don't forego trying to measure something just because you know it won't be a perfect measurement. Is it a better measurement than what you're currently using? Will it be valuable in making a decision? How much is on the line in that decision? There was another chestnut he had about the animosity towards statistics: When people say that you can prove anything with statistics, they probably don't really mean statistics. They just mean broadly the use of numbers, especially for some reason percentages. And they really don't mean "anything" or "prove". What they really mean is that numbers can be used to confuse people, especially the gullible ones lacking basic skills with numbers.

  15. 5 out of 5

    Jason

A dense, hard-to-read book, but so worth it. It's been a while since I read (and finished) a book so dense and complicated. It was worth it, though, as it changed so much of how I think about everything, from work, estimation, and prioritization to all the data that is around us every day. So. Very good.

  16. 5 out of 5

    Emil O. W. Kirkegaard

    Kind of an introduction to applied decision theory, with some good stuff about how to quantify things.

  17. 5 out of 5

    Daniel Hageman

    Fantastic book for anyone worried that our lack of certainty in measurement techniques implies a categorical inability to measure in principle.

  18. 5 out of 5

    Allison

    Lots of great commentary on why using data is important... his processes for measurement are less... interesting? A good read for data people. :)

  19. 4 out of 5

    Vlad Ardelean

Oh boy, I've been waiting a long time to review this one. I'll start with the good parts, as they're few and far between. I've also posted this review directly from the Kindle app twice already, and it doesn't show up, so this is my third attempt to post a review for this book.

The good parts: I learnt how to measure the population of fish in a lake. That's quite cool! I will not give a spoiler here; enough to say that it involves catching and tagging the fish. Then I learnt a few statistics factlets. For instance, in a normal distribution, 90% of the measurements will fit in the interval of ±1.645 standard deviations (a total width of 3.29 sigmas). I also learned that I can have 93.75% confidence that if I ask 5 random people how long it takes them to get to work, the population median will be between the maximum and minimum of those 5 values... regardless of the size of the population. These are just statistical truths, no debate there. I also learnt about Emily Rosa, who debunked the claims of "touch healing" therapists regarding their being able to detect auras... spoiler: they couldn't do it, or at least couldn't show they're better than tossing a coin. I learned about how Enrico Fermi was really good at estimation problems using just his available knowledge. I learnt about Eratosthenes, who estimated the radius of the Earth with quite high accuracy! It was fun. Other nice things in the book were mentions of the Rasch and Lens decision models, and Monte Carlo simulations for assisting in decisions. Then Daniel Kahneman (and some other people) are mentioned for contributions to psychology whereby they show consistent flaws in human thinking (we're very bad at estimating extremely rare events). There's some talk about Bayesian statistics compared to the "frequentist" interpretation. Another thing that surprised me was that the author talks at length about these magical people called "calibrated estimation experts". Apparently (and there's literature with more evidence to show for this), you can train yourself to give answers AND the probability of each answer being right. For instance, I don't know when Napoleon was born, but I can say with 90% certainty that it was between 1750 and 1850. Apparently, you can train yourself to become very good at providing that probability. The author then provides a few tricks on how to better give a probability for "guessing" answers. This sums up the good parts of the book. I have not provided more details here, but rest assured you won't find much more detail than this in the book.

The bad parts: The author bashes and mocks people so much, it's unreal. He especially has a deep hate towards managers. Here's some "statistical" evidence: I counted the number of times the author wrote the word "managers" in the book. It's 79. Here are a few quotes, and they go on and on and on... and on:

"I heard managers say that since each product is unique, they cannot extrapolate..."
"I have known managers who simply presume the superiority of their intuition..."
"...it simply won't occur to many managers that an 'intangible' can be measured"
"...her examples prove what can be done by most managers if they tried"
"...Other managers might object: 'there is no way to measure that thing without spending millions'"
"Once managers figure out what they mean and why it matters, the issue in question starts to look a lot more measurable"
"Business managers need to realize that some things seem intangible only because they just haven't defined what they are talking about"
"The problem is that when managers make choices about whether to bother to do a random sample in the first place, they are making the judgments intuitively..."
"But it has some significant advantages over much of the current measurement-stalemate thinking of some managers"

Maybe not all mentions of the word "managers" have a directly bad connotation, but I'm quite sure none of those mentions put managers in a good light. There's more! The author uses another formula to mock people, and that is "those who...". I searched for usages of that formula: 45. I won't quote, but I hope you get the idea.

More bad things. Remember when I wrote about Emily Rosa and her debunking of supernatural powers? The author has an interesting fascination with coming back to her example. He does this 97 times, in fact! With 410 pages, that gives a mention of Emily every 4.23 pages. Enrico Fermi and Eratosthenes get less attention, with only 52 and 37 mentions throughout the book. Still, I think it's fair to say that repetition is an issue with this book. To top it off, the author has the arrogance to claim that with a book such as this one, Eratosthenes, Emily Rosa and Enrico Fermi would probably have been able to do a lot more.

More bad things: The author claims that there are plenty of statistics books, and this is not one of them. He advertises his book as providing general ideas applicable everywhere. Among those ideas are things like "measurements help in making a decision", "there's always more information than you think you have", "you always need less information than you think you do", "measure the things that are most important" and "take into account whether the price of the measurement is lower than the cost of the decision". Am I alone in thinking that these ideas are so trivial that a book about them is not really valuable? Also, since he's talking about decisions, he never mentions the time aspect of a measurement, just the price. You'd think he might consider that, but nope! I don't understand who the target audience for this book is. Is it the "managers" the author continuously mocks? Not likely. Is it people who want to learn how to measure? Likely not either, because this book doesn't really teach any measurement techniques; it just mentions three decision-making models, which he barely explains.

Even more bad: The author introduces the terms that I talk about in the "good" section. That's all he does; he "introduces" them. I did learn statistics while reading this book, but that's because I spent a lot of time on Wikipedia. The author doesn't try to rigorously explain these concepts. At most, you get from him recipes like this: 1. Note down the numbers you get from doing X. 2. Take the average of those numbers. 3. Subtract the average from each number. 4. Multiply the difference by 1.645. 5. Etc., etc. (This is not an example from the book; this is just my impersonation of the author's examples. They are hard to follow on Kindle. There are not enough explanations, and then you're just left with a recipe.)

Next to not explaining complex concepts, the author also over-explains simple ones, again in a very repetitive fashion. There are a lot of unnecessary explanations of very simple graphs. There's one graph illustrating the price of measurement versus the value of information: the price of measurement rises slowly at first and increases fast as the amount of information approaches perfect information, while the value of information is the opposite, rising sharply at first but then only very slowly towards the maximum amount of information. I'm not sure how much time the author spends on this, but I did have the feeling that it's ridiculous, so I'm reporting the incident. It's not the only incident like this.

The ugly: This part is my personal interpretation of the author's intent, based on the book's content. The author emphasizes quite a lot that he has a company that offers calibration training. Therefore I think that, at least in part, the motivation for this book was self-advertisement. This would be fair if stated up front. It was not stated up front, though. The author might also have been using the "statistical" fact that one can charge more for longer books. Clever, but I'm asking for my money back on this one. DO NOT READ THIS BOOK! IT'S TOO LONG AND REPETITIVE TO BE A GOOD INTRODUCTORY BOOK, AND CONTAINS FAR TOO LITTLE INFORMATION FOR IT TO BE ANYTHING ELSE.
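The commute-time factlet in the review above is the book's "Rule of Five", and it is easy to check by brute force. A minimal Python sketch with a hypothetical population of commute times; the theoretical answer is 1 − 2·(0.5)^5 = 93.75%, because the only ways to miss the median are for all five samples to land above it or all five below it.

```python
import random
import statistics

random.seed(42)

# Hypothetical population of commute times (minutes); the rule holds for
# any continuous distribution, so the exact shape does not matter.
population = [abs(random.gauss(35, 15)) for _ in range(100_000)]
true_median = statistics.median(population)

runs, hits = 20_000, 0
for _ in range(runs):
    sample = random.sample(population, 5)
    if min(sample) <= true_median <= max(sample):
        hits += 1

print(f"median inside [min, max] of 5 samples: {hits / runs:.2%}")  # ~93.75%
```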

  20. 5 out of 5

    Stephen Rynkiewicz

Classical Greeks not only figured out that the planet is round, but had it measured. Eratosthenes calculated its circumference from a lunch-hour measurement at his library in Alexandria during the summer solstice, knowing only his distance from the Tropic of Cancer. Eratosthenes is a hero of Chicago statistician Doug Hubbard, who trains managers in "calibrated estimates," basically closely observed ballpark figures. Here he describes approaches to making more accurate guesses, including when it's worth spending money to take out some of the guesswork. If you didn't get past introductory statistics in college, this is a useful guide to Monte Carlo simulations, Bayesian inversion, crowdsourcing and other analytical concepts. Not only does Hubbard open up the black box of predictive modeling, but he also points to ways we can think about thinking: it's risky to rely on just gut instinct, but maybe we can trust our gut once we measure just how far to trust it.
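For the curious, the arithmetic behind that lunch-hour measurement, using the commonly cited figures (the review itself does not give them), is a one-liner: the solstice shadow in Alexandria implied the sun was about 7.2° off vertical, i.e. 1/50 of a full circle, while it was directly overhead at Syene on the Tropic of Cancer, roughly 5,000 stadia to the south. So the circumference is about 50 × 5,000 = 250,000 stadia, within a few percent of the modern value depending on which length of stadion you assume.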

  21. 4 out of 5

    Jeff Yoak

This was a fantastic read. It helps with general numeracy as well as providing an overview of how to think about measurement and statistics practically. This is an area where I have some experience, and I still learned a lot. This book, especially the first half, should be accessible to everyone. The second half is a bit more technical, and I wished I had been reading it on paper instead of in audio. I may do that eventually. The pacing is a little hard in audio and I could have benefited from notes, but still... a great read and actively beneficial.

  22. 5 out of 5

    June Ding

The title made me curious. The author did make the case that anything can be measured, including many things that we consider abstract or intangible. The stories given at the start of the book are fascinating and opened my mind about what we think measurement really is. There is no perfect measurement. There is no absolute truth. Measurement is a quantitatively expressed reduction of uncertainty based on one or more observations. I also found the methods for defining the problem, and the notion that a measurement has to support a decision, helpful.

  23. 5 out of 5

    Peter (Pete) Mcloughlin

Fairly good business statistics book on measuring factors, applying measurement, and some good risk analysis. Definitely overhyped as revolutionary (I think this happens with business books a lot), but it is accessible and gives some good advice on how to measure things statistically and on using statistical methods for practical applications. It isn't the second coming.

  24. 4 out of 5

    Kc

    I purchased this book because I am in the middle of a project where I have to measure an "intangible". I liked the author's ideas on breaking down a measurement and figuring out the uncertainty factor on each variable. The information he provided helped me to find a solution for my project.

  25. 5 out of 5

    Pauli Kongas

    Perhaps not the best read in audio because of some math and a lot of pictures etc.

  26. 4 out of 5

    Hamish Shamus

The only other book I've read which justified its length was Decisive by Dan and Chip Heath.

Strategy (see the sketch after this review):
1. Make a list of factors you think are relevant.
2. Convert each of these factors into a z-score.
3. Add them up to get an overall ranking.

Before measuring something, answer the following:
1. What decision does this support?
2. What observable consequences does it have?
3. How does this matter to the decision?
4. What is the current level of uncertainty?
5. What is the value of additional information?

Notes:
* You can estimate how many fish there are in a lake by catching some at random, tagging them, and throwing them back in. Repeat until you've recaught several fish and then do some statistics.
* Amazon added the ability to add gift wrapping so that it would know how many people are buying things as gifts. Retailers put coupons in newspapers so they know which newspapers their customers read.
* Look out for basic questions/measurements which might obviate any further investigation.
* McNamara fallacy: "The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily really isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide." — Daniel Yankelovich, "Corporate Priorities: A continuing study of the new demands on business" (1972)
* Statistician David Moore said, "If you don't know what to measure, measure anyway. You'll learn what to measure." This can also be characterised as a "measure first, ask questions later" school of thought.
* To IQ skeptics: if we can measure a decrease in IQ due to lead poisoning, are you saying we should ignore this, or that it isn't real?
* The Rasch model gives you a way of assigning scores to different people who did different tests. Or something. You just have to add log odds. Or something.
* Brunswik lens model: look at how experts make decisions, model their decision making with a linear model, and the result will generally be at least as good (it removes inconsistency). If you give experts a bunch of real or made-up instances and get them to predict labels, then you can create such a lens model. If you give experts the same instances several times you can estimate the error due to expert inconsistency (which will be removed by using the lens model).
* The Black-Scholes model is how to correctly price stock options.
* Scientometrics is something I should read into.

Plus here's a quote from Night by Elie Wiesel, which is in my notes for some reason: 'At last he said in a weary voice, "I've got more faith in Hitler than in anyone else. He's the only one who's kept his promises - all his promises - to the Jewish people."' - Night, Elie Wiesel
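A minimal Python sketch of the three-step z-score ranking strategy at the top of the review above. The projects, factors, and scores are entirely hypothetical; the point is only the mechanics of standardizing each factor before summing, so that no single factor dominates just because of its units.

```python
import statistics

# Hypothetical candidate projects and factor scores. Higher is better for
# every factor here; in practice you would negate "lower is better" factors.
projects = {
    "A": {"revenue": 120, "strategic_fit": 7, "team_confidence": 0.6},
    "B": {"revenue": 80,  "strategic_fit": 9, "team_confidence": 0.9},
    "C": {"revenue": 200, "strategic_fit": 4, "team_confidence": 0.5},
}
factors = ["revenue", "strategic_fit", "team_confidence"]

totals = {name: 0.0 for name in projects}
for f in factors:
    values = [projects[p][f] for p in projects]
    mean, sd = statistics.mean(values), statistics.stdev(values)  # sample sd
    for p in projects:
        totals[p] += (projects[p][f] - mean) / sd  # z-score of this factor

# Overall ranking: sum of z-scores, best first.
for p, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(p, round(score, 2))
```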

  27. 4 out of 5

    Albert

The book is a very interesting one, which presents the premise that anything that needs to be assessed can be measured, in one form or another. Of course, there is a need to define/redefine what a measurement is. In this, the book is a fascinating look at the paradigm shift that needs to occur to perceive the world in a new way that allows it to be measurable. Many basic assumptions are challenged and revised in the process, which was actually neat. It brings a new perspective, which opens more possibilities and opportunities. Towards the end of the book, it starts getting very math- and statistics-heavy, and necessarily so, to present the complete content of his methodology. Even if you end up getting lost in the math section (and that's frustratingly easy to do when someone is reading a math equation to you), the principles set forward help to accept the assertion that anything CAN be measured. Wow. I finished this audiobook, but the voice acting is SO bad that I spent the first third of the book getting used to listening to him, detracting from the concentration I had to pay towards the content. The voice is pretty identical to the announcer's from when we had to call on the phone to get movie showing times, just like the one that Kramer mocks in the Seinfeld episode. Another annoyance that I could not get over: the voice actor read, literally, over and over again, "i e" and "e g" instead of converting them to "that is" and "for example"... It made the book feel so stunted, and the reading felt... dumb. There are books that are amenable to audiobook format, and then there are books that just should not be made into audiobooks. This is one of the latter. Not only does it not work to have a mathematical equation read out to you, but there is additional information containing charts, graphs, and even test exams that are referenced and really should be consulted online while going through the book; this kind of defeats the purpose of an audiobook, it seems to me. But this is a problem with the book's format, not its content. Because of the content, I am willing to give this 4 stars. But this should never have been made into an audiobook. The content doesn't lend itself to it, and the voice actor chosen should definitely find a different avenue of work. Again: do NOT make the mistake I made and get an audio version of this book! Read it on paper instead!

  28. 4 out of 5

    Sundarraj Kaushik

A nice book. A must-read for sceptics like me who think there are many immeasurables in business. The key message the author gives is that, instead of taking or avoiding a path because one does not find the right measurement, an attempt should be made to find out what can be measured to reduce the risk if the path is taken or not taken. This will help make a more sensible decision than just saying there are immeasurables. In short, some information is better than no information. It is recommended that one of the following tools be leveraged to carry out the measurement with whatever data is available:

1. Monte Carlo simulation
2. Markov chains
3. Bayesian probability and Bayesian inversion for ranges
4. Rasch model
5. Lens model
6. Simple sampling (the key is that the samples must be really random)
7. Brunswik's method
8. Dawes' z-score model
9. An objective model, if historical data is available

The myth dispelled in the book is that you need a lot of data: when you have a lot of uncertainty, you don't need much data to reduce that uncertainty significantly. Even a very small amount of relevant data will go a long way.

Some of the issues that must be avoided are:
1. Bandwagon effect
2. Halo effect
3. Choice blindness
4. Over-measuring: the law of diminishing marginal returns starts applying, and further measurement only adds to the cost without reducing the risk significantly.

At a high level, the steps outlined are (see the sketch after this list):
1. Define the decision and the variables that matter to it.
2. Model the current state of uncertainty about those variables.
3. Compute the value of additional measurements.
4. Measure the high-value uncertainties in a way that is economically justified.
5. Make the risk/return decision after the economically justified amount of uncertainty reduction.

A must-read for all decision makers, which is all of us.
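Step 3, computing the value of additional measurement, is the most mechanical of the five. The standard decision-theory quantity behind it is the Expected Value of Perfect Information (EVPI), which Hubbard uses as the ceiling on what any measurement is worth. Here is a minimal Monte Carlo sketch for a deliberately simple hypothetical decision: invest $1.0M for a payoff currently believed (with a calibrated range) to lie between $0.4M and $2.0M, modeled as uniform for simplicity.

```python
import random

random.seed(1)
COST = 1.0  # $M, hypothetical investment
payoffs = [random.uniform(0.4, 2.0) for _ in range(100_000)]  # $M, belief

expected_payoff = sum(payoffs) / len(payoffs)
decide_invest = expected_payoff > COST  # best decision under current uncertainty

# Opportunity loss per scenario: what we give up by sticking with that decision
# instead of choosing perfectly with hindsight. EVPI is its expected value.
if decide_invest:
    losses = [max(0.0, COST - p) for p in payoffs]   # invested, but payoff < cost
else:
    losses = [max(0.0, p - COST) for p in payoffs]   # passed, but payoff > cost

evpi = sum(losses) / len(losses)
print(f"EVPI ~= ${evpi:.2f}M -> never pay more than this for a measurement")
```

For these numbers the analytic answer is $0.1125M, so any measurement costing more than that is not economically justified, which is exactly the gate that step 4 applies.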

  29. 5 out of 5

    Lukasz Nalepa

For a long time now, I'd heard recommendations from various people that this book is really worth reading. It took me a while to grab it, though, as it did not seem like a very interesting topic for me, but finally I decided to give it a try - I needed to think about some measures, and I hoped to find some inspiration and guidelines. Cutting to the chase - I feel deeply disappointed. I feel like this book is more about decision making and statistics (or probabilities) rather than actual measurements. There are some tough cases described there, of course, like measuring the value of information, measuring risk, and reasoning about the value of a human life - but not measurement itself, though. For me it falls short compared to the ambitious title. I was inclined to give it two stars tops for almost the whole duration of reading, but the summary of the book reminded me that I actually had some takeaways from it. Most important for me personally was the reminder that measurement is a way of decreasing uncertainty - there are no absolute measures, and by that fact alone, everything can be measured at least a bit. The second nice takeaway was the idea of using Monte Carlo simulations to deal with ranges of unknowns (due to uncertainty). The third, and final for me, would be to employ some "statistical" tricks while having a limited amount of data. So overall: 2.5 (rounded up, I guess) stars from me. Maybe it would be better with more examples, a more interesting narrative and far less statistics. Maybe the title should be: How to Use Statistical Methods on Non-statistical Problems. Then it would be descriptive and I would definitely have skipped it, without all that whining ;)

  30. 5 out of 5

    J Keefer

This is a good layman's introduction to reducing uncertainty, especially in business problems. Hubbard makes a strong case for prioritizing measurement of even seemingly nebulous intangibles. The book is centered around a useful framework for tailoring uncertainty reduction to specific problems (p. 41-42; p. 266-270). As a result, I can see this book being a pillar for an enlightened manager to reference frequently. I found the first half of the book excellent. It was dense with intuitive takeaways (a few of which are included at the end of this review). I especially enjoyed the "calibration exercises" of chapter 5, which helped me better understand uncertainty and confidence intervals. In the second half, Hubbard discusses at a high level some of the statistical methods he uses, as well as some applications. As mass-market books on technical topics often do, it tries to find a balance between providing a high-level survey of the subject matter and giving some nuts-and-bolts details, but I don't think it succeeds. Unfortunately, it does not employ the same clear exposition as the first half of the book (for example, I thought chapter 10's discussion of Bayesian statistics was muddled and confusing compared to other treatments I've seen). Some choice quotes from the hardcover 2nd edition: P. 23: a measurement is "a quantitatively expressed reduction of uncertainty based on one or more observations." P. 27: If a trait matters at all, it is detectable, and therefore measurable. P. 28: "All measurements of interest to a manager must support a specific decision." P. 41: "Ignorance is never the moral high ground." P. 76: "Once calibrated, you are a changed person. You have a keen sense of your level of uncertainty."
