# Formative Validity & Summative Validity

## What is Formative Validity?

In a broad sense, formative validity **shows that a measure improves a program, assessment, or theory.** The idea is that a measuring tool (e.g. a test or procedure) should give you useful, formative information about the direction you want to take.

In **outcomes assessment**, formative validity is a measure of how well a procedure gives useful information that improves what’s being assessed. For example, let’s say you were developing a rubric for assessing your mathematics students this year. You test your students’ general math knowledge and find they are lacking algebra skills. That test has formative validity, because it has the potential to improve your rubric (in this example, you might want to place less emphasis on algebra testing and more on building basic algebra skills).

A **general theory** can also have formative validity. Lee and Hubona (2009) state that in order to have formative validity, a theory must involve “…data obtained through random or representative, rather than biased, sampling” (p. 246). In grounded-theory research, a theory only has formative validity if its constructs or variables are grounded in the present data (rather than based on prior research or theories).

## Summative Validity

Lee & Hubona (2009) define formative validity in terms of a theory-building process; this leads to *summative* validity, which concerns the end result of that process. Summative validity is only achieved when the theory has been tested empirically using the logic of *modus tollens*. In simple terms, modus tollens is: *if p implies q, and q is false, then p is false.* In other words, if a theory implies a result “q,” and that result is negated, then the theory is tossed out as well.
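The modus tollens logic above can be sketched in a few lines of code. This is a minimal illustration, not anything from Lee & Hubona; the function name and the example premise are hypothetical, chosen only to make the inference rule concrete.

```python
def modus_tollens(p_implies_q: bool, q: bool):
    """Apply modus tollens: given 'p implies q' and the observed truth of q,
    return False for p when q is false, or None when nothing can be concluded.
    """
    if p_implies_q and not q:
        return False  # the premise p (the theory) is refuted
    return None       # the theory survives this test, but is not thereby proven


# A theory (p) predicts an observable result (q).
# The empirical test fails: the predicted result is not observed (q is False).
theory_holds = modus_tollens(p_implies_q=True, q=False)
print(theory_holds)  # False -> the theory is rejected
```

Note the asymmetry this captures: a failed prediction refutes the theory, but a confirmed prediction (q is true) does not prove it, which is why the function returns `None` rather than `True` in that case.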

**References**:

Allen, M. (2004). Assessing planning and implementation. In *Assessing Academic Programs in Higher Education*. California State University, Institute for Higher Learning. Anker Publishing Company.

Lee, A. S., & Hubona, G. S. (2009). A scientific basis for rigor in Information Systems research. *MIS Quarterly, 33*(2), 237-262.


