Tuesday, November 14, 2006

Berube ON FDA (A draft)

WHAT FOLLOWS IS A DRAFT OF A PART OF AN ARTICLE THAT I AM WRITING.

The first nanotechnologies that will have an impact on public perception will be those associated with medical care. Quite simply, necrophobia is a powerful fear, and drugs and devices which can delay death will be treated favorably. The primary source materials for the following came from Nakissa Sadrieh of the FDA’s Office of Pharmaceutical Science.

The first problem will be the “…difficult[y] for the FDA to maintain adequate scientific expertise in the field.” This has always been the case for the FDA, given that the wages available to its personnel cannot compete with what they could earn should they relocate into the business of pharmaceutical and medical innovation.

Strangely enough, the two areas posing the greatest difficulties for the FDA are food supplements and cosmetics, the two product categories with the weakest legislative history among everything that might be intuited as falling under the FDA’s purview.

Recently, there have been some indications that nanoscience will make intrusions into the supplement market, according to Ann Dowling of the Royal Society. There seems to be concern that supplements using nanoparticles are on the Woodrow Wilson inventory, though I have not verified this claim at this time.

As we learned from the Magic Nano fiasco, not all products claiming to include nanotechnology actually do.

The best example of a nano-supplement to date might be Neosino AG’s Nanosilimagna™, which allegedly contains calcium, silicon, and magnesium in the form of nanoparticles. It is marketed as a nutrition supplement for athletes under the name Neosino Sport Nano-Liquid™. The German Sports Federation, footballer Roy Makaay, and cycling champion Michael Themann all endorse it. It is also used in cosmetics as Neosino Spray-Forte™. Having reviewed one of the studies on its efficacy, there seems little doubt that product claims have been hyperbolized. Of course, the supplements industry does not have to go through clinical trials as required for pharmaceuticals, and it is unclear whether any toxicity testing was undertaken. The supplements market is enormous and we can expect more examples to surface.

We know that nanoparticles have been used for over a decade in cosmetics, especially sunscreens. This observation was made by ICTA and seven other groups when they filed a petition with the FDA in 2006. They claimed there were at least 116 sunscreens, cosmetics, and personal care products currently on the market and warned about the production of free radicals and dermal penetration, especially through broken skin. Additional fears about the use of nanoparticles in cosmetics can be found on the Friends of the Earth website.

The ICTA petition calls for a formal opinion on nanotechnology, especially with regard to the concept of equivalence and the fast-lane approval process. Concerns over health and safety serve as the basis for demands for definitions and characterization, nanomaterials toxicity-testing paradigms, and labeling. The petition also calls for nano-specific product regulation much like that advocated by Davies, and it calls upon the FDA to comply with the National Environmental Policy Act (NEPA). NEPA can require a programmatic environmental impact statement and could serve as the basis of future legal action.

The FDA can only regulate where it has been empowered to regulate, and some areas, such as food supplements and over-the-counter sunscreens, may need legislative activity. Indeed, the ICTA complaint demands the FDA amend the OTC sunscreen Drug Monograph to reclassify sunscreens with nanoparticles as “new drugs,” which would require that manufacturers submit a “new drug application” (NDA).

New medical nanoproducts will probably be subject to the same rigorous review given current pharmaceutical drugs, though the FDA may need additional training in understanding the differences among a medical device, a drug, and a biological or chemical entity, especially when a product looks like food but acts like medicine.

It has been difficult determining exactly what the FDA has been doing. Consider the following anecdote from Rick Weiss of the Washington Post (I am not defending its validity at this time). Norris Alderson, the FDA’s associate commissioner for science, reported in 2004 that “the agency had so far approved six nano-based products: two drugs, two medical devices, and two sunscreen lotions. But [when questioned] he did not know whether special safety tests had been required. When pressed for details, an agency representative called back to report that, in fact, no nano-based products have been approved. No explanation for the confusion was offered.” This was a little embarrassing. Clearly things have improved. Nonetheless, it is unsurprising that there have been many concerns expressed about the FDA’s regulatory authority and efforts.

These concerns led the 2004 Swiss Re report to conclude that neither the FDA in the USA nor the EC’s Scientific Committee on Cosmetic Products and Non-Food Products Intended for Consumers has established viable hazard guidelines. This complaint was amplified in the previously mentioned petition.

Even without explicit guidelines per se, the FDA seems to have the machinery and a track record that might make it especially viable as a government regulator for many nano-products, though the breadth of its current mandate, especially considering advances at the nanoscale, might make its mission unmanageable.

In general, the FDA has a set of centers that deal with particular types of products. For example, while the Center for Drug Evaluation and Research (CDER) is responsible for drugs, the Center for Devices and Radiological Health (CDRH) deals with devices. This brings us to the first issue. There are times when it becomes problematic determining whether a nanoproduct is a drug, a device, or even a biologic, because it may have characteristics of all three classes.

Following on the heels of the FDA Modernization Act of 1997, two initiatives, the Tissue Reference Group in 1998 and the Device Action Plan in 1999, helped streamline jurisdictional issues. Today, when there is a question about jurisdiction, the Office of Combination Products resolves it. To help, the FDA has a Nanotechnology Interest Group (NTIG) with representatives from each center. Generally, the decision is based on the primary mode of action of the combination product, which has been defined as “the most important therapeutic action,” a definition published in the Federal Register. In addition, the NTIG will facilitate inter-center communications. Kudos to the FDA.

Both devices and materials used in vivo undergo rigorous trials. For drugs, the process begins with an Investigational New Drug (IND) application, which generally involves small-scale clinical studies and animal testing. The next step requires clinical studies, which are followed by an NDA. There are three phases. In Phase I, safety data are obtained from human subjects. Phase II generates data on effectiveness. Phase III expands on Phases I and II by increasing the sample of subjects. An NDA review follows.

Here’s a review of some action to date.

Using technology developed by Elan Drug Delivery, Wyeth, Merck, Abbott, and American Pharmaceuticals all developed nanoparticulate drugs and received FDA approval. Elan developed a milling technique which increases the bioavailability of some of these companies' products.

The IND-NDA three-phase process described above was the route taken by Merck, Wyeth, and American Pharmaceuticals. The FDA approved Abraxane™ in January 2005 for treatment of breast cancer after failure of combination chemotherapy for metastatic disease or relapse within six months of adjuvant chemotherapy. Abraxane™ is touted in the literature as the first approved nano-drug per se.

Merck proceeded with Emend®, the nanoparticulate drug aprepitant. Wyeth brought its drug sirolimus to Elan for development of a nanoparticulate formulation called Rapamune®, which successfully obtained approval as well.

Instead of pursuing the clinical trial route, a company may elect to claim bio-equivalence, as Abbott did in the case of TriCor®, whose nanoparticulate formulation was claimed to be equivalent to the existing fenofibrate formulation. This seems the best place to begin the discussion of new versus existing.

In terms of devices, NanoOss® is a nanocrystalline version of hydroxyapatite that is less likely to crack, hence it “has the strength of steel.” NanoOss® was classified as a Class II device and Angstrom Medica received FDA approval.

Devices are classified into three categories by the CDRH. Class I devices are subject to general controls and include band-aids and crutches. Class II devices need to show substantial equivalence, much like the bio-equivalence discussed above, and include items like wheelchairs and tampons. Class III devices involve the “introduction of a new material, procedure, or device without a substantial equivalent in the marketplace.” “In the eyes of the FDA, NanoOss® was just calcium phosphate.” Hence, we see another instance in which a nanoproduct is viewed as equivalent to its bulk chemical counterpart for purposes of regulation.
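Since the class determines the regulatory showing required, the scheme reduces to a simple lookup. What follows is a toy sketch in Python, my own paraphrase of the CDRH classes above rather than official FDA language; the example devices are the ones named in this post.

# Toy lookup of the CDRH device classes described above. This is a
# paraphrase for illustration, not official FDA language or a complete
# statement of the requirements.
device_classes = {
    "I": ("general controls", ["band-aids", "crutches"]),
    "II": ("substantial equivalence to a marketed product", ["wheelchairs", "tampons", "NanoOss"]),
    "III": ("new material, procedure, or device with no substantial equivalent", ["implanted pressure sensors"]),
}

def required_showing(device_class):
    """Return the paraphrased showing for a given device class."""
    controls, examples = device_classes[device_class]
    return "Class %s: %s (e.g., %s)" % (device_class, controls, ", ".join(examples))

print(required_showing("II"))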

Another device, CardioMEMS’ EndoSure Wireless AAA Pressure Measurement System, was designed to treat an aneurysm of the lower abdominal aorta, and it received FDA approval in February 2004. According to CardioMEMS’ CEO David Stern, the company had to prove the materials it used were bio-compatible and stable. As a Class III device, clinical tests began in March 2004 in Brazil and eventually included populations in Argentina, Canada, and the USA. If there is an area that the ETC Group should begin to question, it might be the use of developing countries' citizens [though I would not classify Argentina and Brazil as developing countries per se] as test subjects in the approval process.

Immunicon received FDA approval in August 2004 for its CellSearch Circulating Tumor Cell Kit, CellTracks AutoPrep System, and CellSpotter Analyzer, all of which use magneto-nanoparticles called ferrofluids.

In another case, reported by Small Times’ David Forman, AcryMed’s SilvaGard®, an antibacterial nanoparticle coating for medical devices, made it to market. Because it is a coating, AcryMed’s customers file for FDA approval rather than AcryMed itself. In December 2005, I-Flow Corp.’s ON-Q SilverSoaker® antimicrobial catheter using SilvaGard® was approved.

The FDA notes that it has traditionally regulated many products with particulate materials in the nano-size range and that the current pharmacotoxicity tests are sufficiently adequate for most nanotechnology products that it will regulate. In cases where current tests are inadequate, the FDA can require the petitioner to meet higher burdens of proof.

The FDA also has authority for post-market surveillance. While some consumer confidence has eroded after the heart damage problems associated with Vioxx and other Cox-2 inhibitors, Congress is anticipating internal reforms and there is some support for new legislation as well.

ANY CRITICAL COMMENTS ARE WELCOME.

ON Michael Taylor's Regulating the Products of Nanotechnology: Does FDA Have the Tools It Needs? - WEAKLY RECOMMENDED

Michael R. Taylor, Regulating the Products of Nanotechnology: Does FDA Have the Tools It Needs?, October 2006. http://www.nanotechproject.org/82/10506-regulating-the-products-of-nanotechnology.

A recent article from the Woodrow Wilson/Pew Project on Emerging Nanotechnologies by Maryland academic and former FDA official Michael Taylor added very little to the debate over FDA regulation of drugs and devices. I found the sections on cosmetics much more interesting and the sections on food less so.

However, maybe that is because I read too much on this subject. Pages 8-10 summarize a series of recommendations which will introduce the neophyte to important concerns such as revision of the OTC sunscreen monograph, standards for substantiation data, and the question of new or not new. As such, I find myself recommending sections of this article.

Everyone knows the FDA is missing tools for regulating cosmetics the way it regulates drugs. And everyone knows the FDA needs a bigger budget if we are going to ask it to treat all nanoproducts it regulates as "new," rejecting the bio-equivalence fast lane. We know after the Cox-2 fiasco that we probably need post-market oversight for all drugs. We also know that when the FDA negotiates trial procedures with a manufacturer, it can demand as much or as little information as needed.

For me, here are the core questions:

1. Should the FDA be given more responsibilities or is it time to break it into smaller entities to consider different issues?

For example, instead of pouring more money into the FDA and then expecting it to balance pre-market and post-market oversight, it might be better to separate those functions into different entities altogether. We learned a chilling lesson when we found the Atomic Energy Commission overwhelmed, and while hardly ideal, a fair case can be made that the DOE and NRC functioning separately contributed to public trust. Maybe it's time to examine whether the FDA is simply overwhelmed.

If you catalogue some of Taylor's recommendations, you begin to sense the behemoth (I purposely chose not to use the word leviathan, but I really wanted to) he is building. Data will need to be checked, and a major portion of his recommendations deals with data authority. On another level, he calls for early-warning information collection and regulatory research, in addition to a major increase in FDA staff along with major budgetary enhancements.

Maybe nano is the proverbial straw, and we should consider examining structural reform as well as implementing legislation and budget enhancements. It might be time to consider having food, drugs/devices, and cosmetics regulated by the USDA, FDA, and CPSC respectively, allowing each of these entities to develop expertise specific to its functions when it comes to products on the nanoscale coming under its jurisdiction.

It may be time to consider a major reorganization of the FDA.

2. Should all nano-formulations (drugs), and sometimes drug/device combinations, be treated as new for purposes of regulation?

Yes and no. It depends. We have heard repeatedly that what makes the nanoscale interesting to the pharmaceutical industry may be exactly what makes it toxicologically suspect. When a formulation can cross formerly impenetrable barriers, work its way into cells themselves, etc., concern lies with what happens if the formulation finds its way into cells, organelles, etc., where it is not needed and potentially toxic. In cases where the formulation is nano-ized and can be more effective due to heightened bioavailability, and is short-lived or even soluble, then maybe not. However, in situations where there is no viable treatment regimen, a patient might elect a more speculative treatment involving nano-formulations. As such, regulatory overkill might threaten development, and that would be unfortunate.

New or Not New is an important question, but the question might be better articulated as Deal or No Deal. Truth be told, we are discussing an exercise which demands a net assessment.

3. Should the burden to demonstrate safety be precautionary (in a strict sense)?

While the burden on industry to prove a drug is safe and effective was a positive development (the Kefauver-Harris Amendments), the degree of safety and efficacy needs to be set at a level which the industry can meet. Taylor claims (p. 17) "it is not clear that existing animal toxicity-testing protocols, on which we try to assess the safety of most chemicals, can be used in their present form to assess engineered nanomaterials." Then, what does the industry do? Do they develop new protocols first? If so, how are they vetted? Who does the vetting? How will clinical trials proceed? Though I am no PETAite, I must ask: does this require more animal trials? If every formulation is new, what does this mean in terms of experimental subjects, animal and human?

On p. 22, Taylor wrote: "Most companies take their safety responsibilities very seriously." Regulation helps companies plan. Liability compels industry to take its pre-market safety review very seriously. I am left unsatisfied by the Taylor article when I try to determine what constitutes "early and adequate information." If the FDA wants information regarding drugs in the proverbial pipeline, what does it plan to do with all that information, given the number of false starts in the drug research business? In terms of post-market oversight and the "possibility of unanticipated safety problems," what constitutes a threshold of sufficient oversight, and given the "unanticipatedness" of the events, what does the FDA plan to do (predict the unpredictable)? Does a company get a good manufacturing practices seal of approval? And do we want this done by a government entity at all?

Then we get to the section on cosmetics (pp. 27-30). I have recommended this section repeatedly to inquiries about regulation of cosmetics. It is well written and concise. However, it does not address the risk issue at all. At the FDA Public Meeting (October 10), which I attended and at which I spoke, it was painfully apparent after hearing the testimony of Michael Roberts of the University of Queensland School of Medicine that fears about transdermal penetration have been grossly exaggerated.

The section dedicated to food is interesting as well (pp. 30-39), but if you want to know more about nano and its intersection with food, watch for work from Jennifer Kuzma at Minnesota and the Michigan State team with Larry Busch and John Stone.

This paper ends with a series of recommendations based on the assumption that a monolithic FDA aggregating and concatenating more and more data would be sufficient and up to the task. I am not sure that case has been made, though many of the specific recommendations, like criteria for newness and a reinterpretation of the OTC sunscreen monograph, are no-brainers.

Saturday, November 11, 2006

PUBLIC RISK PERCEPTION – A PRIMER

Here is what happened. As I was writing all of this and telling people what I was up to (since it seemed like I had disappeared), they asked for copies. It is appearing (in part) in a grant submission. I will write it up into an article, but that will have to follow the two chapters of my new book that I have promised my agent.

By the way, I have new articles coming out in Nanotechnology Perceptions (Collegium Basilea) on the "Magic Nano" fiasco and in the Nanotechnology Law and Business Journal on a regulatory alternative to Davies' comprehensive reform [there is an interesting top third of that article on risk perception as well], and there is a chapter in Patrick Lin's edited volume that I did with NYU Law student Chris Dickson on the "Rhetoric of Stakeholding." So I wasn't slacking off.

Enough of all that; here is the Primer on Risk Perception in the 21st Century, in four parts.



Part one

PUBLIC RISK PERCEPTION – A PRIMER


Kahan, Slovic, Braman, and Gastil made the case for risk perception research this year: “The study of risk perception [is] a policy science of the first order…. [N]o one who aspires to devise procedures that make democratic policymaking responsive to such information can hope to succeed without availing [themselves] of the insights this field has to offer” (pp. 1071 & 1072).

“Danger is real, but risk [about chemicals] is socially constructed” (Slovic 1999, p. 689), and the public is anxious and confused. They believe contamination is greater now than ever, and many believe it can never be too expensive to reduce the risks associated with chemicals (Kraus, Malmfors & Slovic 1992, p. 220).

Scholarship in this area has been divided into two main camps: a psychometric approach (Fischhoff, Slovic, Lichtenstein, & Combs 1978) and a cultural approach (Douglas & Wildavsky 1982). The cultural approach includes a worldviews approach associated with Douglas and Wildavsky, an ideological views approach associated with Dake (1991), an elite groups approach associated with Rothman and Lichter (1987), and a cultural cognition approach (Kahan, Slovic, Braman & Gastil 2006). Others, especially Sjöberg (1998), claim none of these theories can adequately explain risk determinations and variance, and a combination of these theories may best explain the phenomenon of risk perception.

According to Sjöberg, “[r]isk perception by the public can be said to be built upon a kind of meta-judgment of risk, i.e. their judgment of what the experts say” (1999a, p. 6). The information is decoded by the public using an algorithm that was not used by the experts when encoding the information. Research tends to support the conclusion that the public has a more multidimensional risk perception, in which many qualitative factors enter into their determinations.

Chauncey Starr (1969) found that voluntariness of exposure was the key mediator in risk perception, with other characteristics such as familiarity, equity, level of knowledge, and risks to future generations important as well. The primary variables relate to personal and scientific knowledge but also include a set of heuristics and biases. Individual characteristics, such as past experience with the hazard or specific technical knowledge, can affect the importance of some dimensions and result in quite different judgments of risks (Savadori et al 2004). Other studies on public risk perception have identified biases such as catastrophic potential, vividness of the effects, and personal susceptibility (Slovic, Fischhoff, & Lichtenstein 1979, Flynn, Slovic, & Mertz 1993, Sparks & Shepherd 1994, and Kletz 1996). Other factors include outrage, stigma, dread, and a list of biases such as affect, availability, loss aversion, status quo partiality, post-decisional regret aversion, etc.

Alhakami & Slovic (1994) found acceptability generally increased with increased benefit unless the risk was low (p. 1091), leading one to conclude “…it might be possible to change perceptions of risk by changing perceptions of benefits…” (p. 1096). This has led to public relations-like campaigns touting the benefits of nanotechnology. Festooned with hyperbole and establishing false expectations, coupled with the release of hardly sensational applications such as pants and bowling balls, this approach is proving overly optimistic.

Experts complain: “The greatest risk to the public’s health may be its own risk assessment…. The same mechanisms that cause members of the public to form exaggerated perceptions of risk will also prevent them from processing scientifically sound information in a rational way” (Kahan, Slovic, Braman & Gastil citing Sunstein 2006, pp. 1080 & 1081).

People interpret a given set of facts about risk through a host of variables, and the results are not irrational to them. This matrix of variables, axiologies of values and beliefs, is supplemented by biases, epiphanies, prior experience, and so forth. Slovic attributes the public’s reaction to risks “…to a sensitivity to technical, social, and psychological qualities of hazards that were not well-modeled in technical risk assessments” (1993, p. 675), and a revolution in risk assessment design does not seem forthcoming at this time.

Realistically, most citizens do not have access to scientific information upon which to make risk decisions. Others do not have the inclination. This has led some experts to advance public science education as a solution. Aggressive public science education, while meritorious for many reasons, remains insufficient. “Scientific literacy and public education are important, but they are not central to risk controversies” (Slovic 1999, p. 689), because the public does not seem to accord extraordinary weight to technical analyses (Jenkins-Smith & Silva, 1998). While there may be a subtle link between the two, “…the link between technical knowledge and perceived risk [by the public] is at best variable” (Johnson 1992).

Moreover, the public seems particularly vulnerable to the maximin or minimax bias (Berube 2000). Low-probability-high-consequence events are exaggerated. For example, events associated with mortality or morbidity occurring within a few days, and thus more noticeable, are assumed more risky than the same or more instances spread over a longer period of time. This phenomenon has been associated with probability neglect, whereby the public focuses on the worst-case scenario. These scenarios can be exaggerated by the media. “Many studies have found that this public perception is heavily weighted in favor of catastrophic accidents. However, this is largely due to news media coverage, which gives infinitely more attention to low-probability-high-consequence events than to frequently occurring, unspectacular or even undetectable events which cumulatively do much more damage to human health” (Cohen 1985, p. 2). The consequences of this set of biases lead “citizens…to support expensive preventative measures, however remote the risk and however cost-ineffective the abatement procedures” (Kahan, Slovic, Braman & Gastil 2006, p. 1077).
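To make the arithmetic behind probability neglect concrete, here is a minimal sketch in Python with hypothetical numbers of my own choosing (not Cohen's data): two hazards with identical expected fatalities, one frequent and diffuse, one rare and catastrophic.

# Two hypothetical hazards with equal expected annual fatalities.
# The numbers are illustrative only, not drawn from Cohen (1985).
hazards = {
    "frequent, unspectacular": (1.0, 100),    # (annual probability, deaths per event)
    "rare, catastrophic": (0.001, 100000),
}

for name, (probability, deaths) in hazards.items():
    print("%s: expected deaths per year = %.0f" % (name, probability * deaths))

# Both lines print 100. The actuarial risk is identical, yet probability
# neglect predicts far greater public concern over the catastrophic hazard.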

Furthermore, tampering with nature, which includes such aspects as immoral risk, human arrogance, and interference with the processes of nature, seems to be a dominant bias in risk estimation as well (Sjöberg 1999a, Sjöberg 2002). In general, this ecological fallacy is demonstrated when the same chemical that appears in nature is assumed more risky when produced by an industrial process (Slovic et al 1995, McDaniels 1997). Tampering with nature seemed to be very relevant in terms of biotechnology and food-related risk perception (Slovic et al 1995, p. 662) for many reasons, not the least of which are dependency on food, the fact that noxious food hazards may not be apparent, unfamiliarity with scientific labels, and so forth (Frewer, Howard, & Shepherd, 1997, Fife-Schaw & Rowe, 1996).

Part two

EXPERTS AND THE PUBLIC

Although claims that expert judgment is more veridical than the public’s are not examined here, the majority of the research indicates the public determines risks differently from experts (for a minority view, see Rowe & Wright, 2001). In addition, Sjöberg (1999b) claimed people are not that misinformed about all risks, and he cites some studies showing convergence of expert and public opinion (Wyler, Masuda & Holmes 1968), though that convergence was specific to illnesses. Nonetheless, these findings are in the minority.

The vast majority of studies document differences between experts and the public in risk perception (Slovic, Fischhoff, & Lichtenstein 1980, Slovic 1987, Kraus, et al 1993, & Slovic et al 1995). For example, we know the public ranks some risks higher, such as chemical products (Kraus, Malmfors, & Slovic 1992 and Slovic et al 1995), radioactive waste disposal (Kletz 1996), and spray cans (Slovic 1987). On the other hand, the public ranks some risks lower than experts, such as X-rays (Slovic, Fischhoff, & Lichtenstein 1979 & Slovic 1987), downhill skiing (Savadori, Rumiati, & Bonini 1998), and bicycles (Slovic 1987).

While Drottz-Sjöberg and Sjöberg (1991) argue differences may exist before scientists receive their education (scientists self-select themselves out of the public), they admit socialization of values, conformity pressures, and familiarity may still be at work.

In general, experts seem to pay more attention to probability while the public is concerned about consequences. Overall, public risk judgments are less closely related to fatalities than those made by experts. Hyperbolically, Renn went so far as to claim probability plays hardly any role at all (2004).

Of course, hazard experts disagree among themselves as well. Kahan, Slovic, Braman & Gastil (2006) argue “…disagreements among risk experts are distributed in patterns that cannot plausibly be linked either to access to information or capacity to understand it” (p. 1093). They claim cultural worldviews, such as political ideology and institutional affiliation, may bias expert judgment as well (Slovic 1995, p. 662). As such, it has been argued that experts may screen arguments to protect their existing beliefs.

The primary rationale seems to be that experts rationalize hazards against dosage and exposure. The public does not. For example, “[t]he public would have more of an all or none view of toxicity… [T]hey appear to equate even small exposures to toxic or carcinogenic chemicals with almost certain harm” (Kraus, Malmfors & Slovic 1992, pp. 217 & 228), despite well-documented hormesis effects of some chemicals. MacGregor, Slovic and Malmfors reported “…people reserve the term exposure for substantial contact or contact sufficient to cause cancer” (p. 653).

When Weinstein (1988) studied public sensitivity to chemicals in food, the statement “When some chemical is discovered in food, I don’t want to hear statistics, I just want to know if it’s dangerous or not” elicited strong agreement from 62% and moderate agreement from 21.6% of the respondents. While the contagion effect between product lines might not be defensible (Berube unpublished manuscript), there is clearly a contagion or cascade effect in terms of food. “[E]ven a minute amount of a toxic substance in one’s food will be seen as imparting toxicity to the food; any amount of carcinogenic substance will impart carcinogenicity, etc.” (Kraus, Malmfors & Slovic, 1992, p. 229). As well put elsewhere, when a young child drops a lollipop on the floor, the brief contact with dirt causes the parent to throw it away rather than washing it off and returning it to the child. Evidence like this led MacGregor, Slovic and Malmfors to conclude “…somewhat subtle changes in how the concept of exposure is conceptualized and communicated evokes very different inferences about its meaning” (1999, p. 652), which offers the risk communicator opportunities to ponder.

Additional sources of disagreement between experts include the open-ended nature of scientific claims. Science is almost never definitive. In addition, knowledge building often involves legalistic and technocratic debates over findings, and this may be disadvantageous to public groups by increasing confusion, engendering panic, etc. Altogether, this is interpreted by the public as disagreement, which increases uncertainty (Kajanne & Pirttilä-Backman 1999), and uncertainty impacts trust (see below).

There is much disagreement on if, why, and how experts and the public use different tools to perceive risk as well. For example, “the assertion that experts’ risk perception is driven by objective data and risk assessments and somehow more simple than that of the public is based on a small sample of experts studied by Slovic and colleagues (see Slovic, Fischhoff, & Lichtenstein 1979) at the end of the 1970s.” Sjöberg added: “…the frequent assertion of simplistic structure in experts’ risk perception is an urban myth” (1999a, p. 8). The myth is due to the psychometric model, which has come under attack (Sjöberg 2000a). More research seems to be in order.

To add another level of complexity, “…merely mentioning the possible adverse consequences (no matter how rare) of some product or activity could enhance their perceived likelihood and make them appear more frightening” (Slovic 1986, p. 405), as occurred with high-voltage lines and cellular telephones. Consequently, “many risk communications about chemical exposure may lead more often to confusion or heightened concerns, when it is actually intended to reduce concerns” (MacGregor, Slovic & Malmfors, p. 654). This becomes increasingly problematic as experts respond with more and improved risk assessment studies. Slovic even warned “…risk-assessment studies tend to increase perceived risk” (1993, p. 680), suggesting great care needs to be taken in studying public opinions and attitudes and in designing risk messages for the public.

Our policy makers are no better equipped to determine public perception of risk at this time. “[W]hen politicians were asked to estimate what they believed was the public’s risk perception, they made gross errors” (Sjöberg 1999a, p. 5). Oddly, this may be due to the input they got from active and concerned citizens rather than from the public at large. Milbrath explained that those who are active and take part in the process, who are able and willing to give of their time and energy, are quite unrepresentative of the public at large (1981, p. 480). Unless great care is taken in risk perception research, it can be counterproductive when it makes a public view more salient, increasing its influence when the view is an unrepresentative generalization.

There is the additional problem associated with framing (Scheufele & Lewenstein, 2005). Framing refers to the idea that the way information is presented, rather than the content itself, can have an important impact on how audiences perceive the information. Modes of presentation can differ in terms of terminological choices, visual cues, or other factors (Scheufele, 1999). The "Frankenfood" label used during the GMO debate is a good example of a frame that may directly impact risk perceptions among a public that does not follow scientific rules of decision making.

Outside of nuclear energy, few studies have been undertaken dealing with a phenomenon like nanotechnology. Moreover, earlier risk studies suffered from a radiophobic bias (fear of things nuclear, including bombs). Some recent studies in biotechnology and the use of chemicals associated with food offer better guidance. For example, in 2004, Savadori et al compared expert and public perceptions of biotechnology in Italy. In general, they found the experts significantly and systematically perceived less risk than the public. In addition, they noted higher perceived risks when the biotechnology involved food-related rather than medicine-related applications. Most interesting was their conclusion that expert and non-expert differences may be affected by the nature of the hazard. They added “…public perception of risk…could be reduced by providing information about benefits….” Unfortunately, they also observed some perceptions, including those of experts, “…could be increased by providing information on harmful effects and negative consequences” (p. 1298), suggesting a complex dynamic is at work.

Slovic, Fischhoff and Lichtenstein supposed: “Attempts to characterize or compare risk, set safety standards, and make risk decisions will founder in conflict if policy makers insist, as they often have, on the narrow definition of risk as a conditional probability of dying” (1985, pp. 92-93). We need to understand better the processes involved rather than institute experiments which may not be grounded in relevant research findings. We need to develop more refined techniques for representing uncertainty and the data sets associated with traditional risk assessment models. “…[R]isk assessment [may have] been oversold because of the need to rationalize decisions about chemicals” (Neil, Malmfors & Slovic 1994, p. 201).

Finally, the window of opportunity remaining for risk communicators to engage the public is closing quickly as nanotechnology products are marched out onto the market. We know “…risk and benefit judgments of the hazards were found to be more strongly negatively correlated under time pressure” (Finucane 2001), and as such, it may behoove us to provide opportunities and methodologies to facilitate public engagement, sensibly and soon.

Part three

ON SOURCES AND TRUST


The public cannot simply be told they are safe. This top-down mode of communication is not sufficiently effective, especially when the source of the message may not be sufficiently trustworthy and the subject is exotic, such as invisible nanoparticles as a component of other products. Slovic (1993) attributed the divisiveness of controversies surrounding risk management and the failure of risk communication to date to a lack of trust and reported that distrust was “strongly linked to risk perception and to political activism to reduce risk” (p. 676). Trust can be useful. Savadori et al reported: “[t]rust helps us reduce uncertainty to an acceptable level and to simplify decisions” (p. 1290). Unfortunately, trust is fragile as well. “It is created slowly, but can be destroyed instantly. In addition, when it comes to winning trust, the playing field is not level: it is tilted toward distrust” (Savadori et al, p. 1291). While others are less absolute than Savadori, they still advise caution. For example, Sjöberg (2001) claimed “…general trust add[s] very little to the explanatory power of trust” (p. 193), suggesting that “specific trust is a more powerful construct than general trust for explaining risk perception” (p. 195), one tailored to the case instant. This suggests an important contextual foundation may be necessary in trust research to reduce overgeneralizing findings.

Without exception, we know the public has less trust in experts associated with industry than in those associated with academia. Unsurprisingly, Barke and Jenkins-Smith discovered that experts’ risk perceptions seemed to be correlated with their employers’ interests (1993). Indeed, industrial toxicologists tend to report risks lower than their colleagues in academia do. The public believes experts may know less than they claim and may be corrupted by being hired by industry or government. In addition, it may be perceived that those who are primarily involved in an activity associated with a risk, like science, may rate it lower (Sjöberg 2002). Corroboratively, Sjöberg reported “experts probably trust industry, agencies and other experts more than the public does” (1999b, p. 5).

The public also notices that experts disagree, and this generates uncertainty. While the public tends to use trust to compensate for their lack of technical understanding of risk issues (Jenkins-Smith & Silva, 1998), this becomes problematic with expert-generated uncertainty.

There remains some disagreement over whether trust is a single variable. It may be a variable bundle composed of many different variables working together to establish trust. On the other hand, there are some inconsistencies in the findings. For example, Sjöberg (2001) found the relationship between trust and risk perception was weak to moderate, while admitting it “is more important for individual consumer behavior” (p. 190). Little about trust may be self-evident.

Trust varies from culture to culture. For example, trust in government entities seems to have weakened in the UK following the mishandling of information about BSE meat, and in Belgium and the Netherlands following dioxin contamination of dairy and poultry products (Savadori et al 2004). For years, it was assumed the public’s trust in American government regulators was substantially higher. Very recent research on public trust and regulation of technology completed by Hart Research Associates (2006) on an American sample found trust has been eroding, and European and American samples are not as divergent as previously reported. In some instances, the Hart research noted specificity of trust, as in the examples of cosmetics and sunscreens using nanoparticles.

Kahan, Slovic, Braman & Gastil believed “…the people [the public] trust, not surprisingly, are the ones who share their cultural worldviews…” (p. 1085). As such, we need to determine how to communicate with the public on this level. “[T]he generation of culture-independent forms of trust, particularly between lay persons and risk experts, may be the most valuable feature of genuinely democratic policymaking…. And one of the most important conditions of such trust, research shows, is the perception that officials have consulted and are responsible to affected members of the public” (Kahan, Slovic, Braman & Gastil 2006, p. 1104). Nonetheless, Slovic cautioned that openness and involvement, “…however, is no guarantee of success” (p. 680).

“Risk assessment, though invaluable to regulators in the design of management strategies, is not at all convincing to the public” (Kraus, Malmfors & Slovic 1992, p. 230). We have learned as well that “…communicating the results of risk assessment to the public relies heavily on language rather than numbers” (MacGregor, Slovic & Malmfors, p. 658). Hence we need to examine risk perception qualitatively, and trust needs to be unbundled and examined against a case instant, such as a nanotechnology-related product or product line.

Part four

SOURCES OF INFORMATION


The social amplification of risk framework describes how both social and individual factors act to amplify or dampen perceptions of risk and thereby create secondary effects such as stigmatization of technologies, economic losses, or regulatory impacts. While many variables can amplify and attenuate risk messages, most of the focus has been on the media (Pidgeon, Kasperson & Slovic, 2003, Kasperson et al, 1988).

We would hope the mass media would play a watchdog role (Siebert, 1956), overemphasizing certain risks or aspects of an issue in order to raise public awareness before potential negative impacts can occur. Unfortunately, many researchers, including Jasanoff (1993), comment that “…the public has a distorted view of risk because the media portray science in an inaccurate way, with exaggerated accounts of uncertainty and conflict” (p. 123). The media report “gripping instances of misfortune, whether or not representative of the activities that give rise to them” (Sunstein, p. 125), “often without enough facts to alleviate the possible fears they cause” (Wahlberg & Sjöberg 2000, p. 34). The complex motivation for accentuation is mostly propelled by economic self-interest and the drive for increased readership and viewership, especially given recent competition from new media resources.

Regrettably, “…members of the public appear to be more willing to believe risk-increasing signals than risk-decreasing signals, regardless of who provides the signal.” Jenkins-Smith & Silva continued: “…[T]hose who make claims that risks are large will be likely to have greater impact on public acceptance than those who make claims that risks are small” (pp. 118 & 119). They recommended the development and maintenance of general credibility of the scientific process and the scientific integrity of organizations and scientists undertaking risk assessments, a meritorious effort but not necessarily a sufficient one given the expert uncertainty discussion above.

The media (including movie and television drama) have been the primary scapegoat when it comes to risk policy dilemmas, and bad news is trust-destroying. The media may even be able to generate risk cascades. Sunstein discussed availability cascades, describing events becoming available to large numbers of people through media coverage and group membership. This can lead to moral panics whereby large numbers perceive sources of danger far out of proportion, such as “dissidents, foreigners, immigrants, homosexuals, teenage gangs” (p. 98), etc.

The public gets information from somewhere, and Savadori et al (2004) indicated that newspapers and TV are among the most trusted sources of information (in Italy) about food-related hazards, followed by medical sources, the government, friends, industry, magazines and radio, university scientists, and consumer organizations. On the other hand, Savadori et al also found newspapers and TV were frequently cited as mistrusted sources (see also Frewer, Howard, Hedderly & Shepherd 1996). Credibility is important to both trust and risk perception. It has repeatedly been found that high credibility of an information source, like trust in risk management, is inversely correlated with risk perception (Finucane, Alhakami, Slovic & Johnson 2000).

Sources of information, in terms of amounts of information, have begun to shift. An Economist cover recently lamented the death of the newspaper. Multiple articles have bemoaned how television news has emphasized hyperbole and entertainment and significantly decreased its information content. While baby boomers still get some of their news from traditional sources, many do not, and generations X and Y and onward simply do not.

We know the traditional media serve to attenuate and amplify messages associated with risk, and the data sets for these conclusions were drawn from studies of newspapers and television news broadcasts. By and large, this research preceded the proliferation of satellite and cable television outlets and, even more importantly, the World Wide Web. Denying the importance of non-traditional sources of news information in public risk perception is not useful.

A recent study found half of consumers turn to network television for breaking urgent news, 42 percent rely on radio, about a third look to local newspapers or cable news outlets, and a quarter use the Internet sites of print and broadcast media…. Asked which sources of news they expect to rely on in the future, 52 percent said they will "primarily" or "mostly" trust traditional news sources over emerging sources, and 35 percent said they expect to confer "equal trust" on both types of news outlets. Thirteen percent said they expect to put more trust in emerging sources (Burns 2006). The swing seems underway.

These emerging sources are called new media. They seem to be here to stay, and while the power of the voices of bloggers may be exaggerated at times, the combined voices of Wikipedia, blogs (written and video), podcasts (audio and video), and IPTV (YouTube) are affecting how news information is communicated to the public. Most importantly, as news media become more self-selectively personalized, readers will be able to ignore information that contrasts with pre-existing judgments. Sunstein worried about this self-sustaining cycle of one-sided information. Describing social cascades, Sunstein attested the smallest of triggers can produce large effects and worried about a major event in history being triggered by unbalanced information or even misinformation (2001).

A Pew Internet and American Life project estimated 11 percent (or 50 million) of Internet users are blog readers. One million blogs are updated daily. Calacanis of Weblogs, Inc. predicted that by 2009, 50 percent of the country will be blogging. The size of the blogosphere doubles every five months. Perseus Development Corporation reported 90 percent of blogs are authored by people between the ages of 13 and 29, with 51 percent between the ages of 13 and 19. While today only 3 percent of Internet users read them daily, among the young (18-29) that percentage rises to 44 percent (McGann 2004).
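For a feel for what "doubles every five months" implies, here is a minimal sketch in Python, assuming steady exponential growth (my assumption, not Perseus's or Pew's):

# Compounding implied by "doubles every five months," assuming steady
# exponential growth. Illustrative only.
doubling_months = 5

for months in (5, 12, 24):
    factor = 2 ** (months / doubling_months)
    print("after %d months: %.1fx today's size" % (months, factor))

# Roughly 5.3x in a year and 28x in two years, a pace that obviously cannot
# be sustained forever, which is worth remembering with claims like these.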

In terms of Wikipedia: after only two years, Wales and Sanger’s wiki model had over 100,000 English articles. After three years, it exceeded 200,000 articles in English, with 500,000 in 50 languages. In February 2004, articles were being added at 2,000 a day. Today, there are 100 active language versions of Wikipedia, and in August 2005, 12,750 readers visited Wales and Sanger’s site daily. In terms of accuracy, a recent Nature study reported the accuracy of Wikipedia to be comparable to The Encyclopedia Britannica for articles on scientific subjects (Giles 2005).

While the podcast base is currently 4.5 million, it is predicted to grow to 60 million by 2011. About 22 percent of iPod users are aware of and consuming podcasts. eMarketer estimated there will be 3 million active podcast listeners by the end of this year and 7.5 million by 2008. The Diffusion Group predicted 11 million by 2008. NPR reported having had 18 million of its podcasts downloaded since August 2006 (Forrester Report 2006).

IPTV (Internet protocol TV)-supported sliver TV is coming. YouTube, the best-known sliver TV, has daunting statistics. In July 2006, YouTube’s viewers were "watching more than 100 million videos per day on its site” (Bogatin 2006), and the fare is not solely amateur rock videos; the site is loaded with video-blogs and news commentaries. Today (October 11, 2006), nanotechnology gets 60 hits while science receives over 11,000. Its creators sold YouTube to Google in mid-October 2006 for over $1 billion; the day Google bought YouTube, Google’s stock value increased by $4 billion. Cable modems work as miniature TV broadcast and reception stations, receiving data on one sliver of a shared TV channel and transmitting it on another. Anyone with a digital video camera and a broadband connection can broadcast. Individuals will capitalize on this and broadcast their own specialty news forums with broadband connections. Think of thousands of Internet channels of specialized news. In the US, we have over 100 million broadband users and millions of potential producers today.

While traditional media, such as newspapers and TV stations, have added web-based adjuncts, those tend to retain the standard format and are used to promote published or broadcast features and some current events. Digital versions of newspapers and mpg4 rebroadcasts of news shows seem to be the current model.

REFERENCES for PRECEDING FOUR POSTS

These are references for the preceding (though temporally following) four posts. As I mentioned at the beginning of this, I was writing a grant proposal and needed to summarize the field for colleagues outside it. Having taught RISK COMMUNICATION for too many years, I put this together and thought it would be useful to you and others.

I plan to publish a version of this in the near future.

REFERENCES

Alhakami, A. S. & Slovic, P. (1994). A psychological study of the inverse relationship between perceived risk and perceived benefit. Risk Analysis, 14, 1085-1096.

Barke, R. P. & Jenkins-Smith, H. C. (1993). Politics and scientific expertise: Scientists, risk perception, and nuclear waste policy. Risk Analysis, 13, 425-439.

Berube, D. (2006). Nano-hype: The Truth Beyond the Nanotechnology Buzz, Amherst, NY: Prometheus Books.

Berube, D. (2000). Debunking mini-max reasoning. Contemporary Argumentation and Debate, 21, 53-73.

Bogatin, D. (2006). YouTube the video star, Act II. ZDNet, July 17, http://blogs.zdnet.com/micro-markets/?p=252, (accessed October 11, 2006).

Burns, E. (2006). Blogs Suffer in Poll On Preferred News Sources. ClickZ Network. October 3. http://clickz.com/showPage.html?page=3623588. (accessed October 5, 2006).

Cohen, B. (1985). Criteria for technology acceptability. Risk Analysis, 5:1, 1-3.

Cohen, G. L., Sherman, D. K, McGoey, M., Hsu, L., Bastardi, A. & Ross, L. (2005). Bridging the partisan divide: Self-affirmation reduces ideological close-mindedness and inflexibility, September 10. http://research.yale.edu/culturalcognition/documents/cohen_self_affirmation_draft.pdf, (accessed October 3, 2006).

Cultural Cognition Project, Yale Law School, National Risk and Culture Survey. October 3, 2006, http://research.yale.edu/culturalcognition/content/view/45/89/, (accessed October 3, 2006).

Dake, K. (1991). Orienting dispositions in the perception of risk. Journal of Cross-Cultural Psychology, 22, 61-82.

Douglas, M. & Wildavsky, A. (1982). Risk and Culture, Berkeley, CA: University of California Press.

Drottz-Sjöberg, B.-M. & Sjöberg, L. (1991). Attitudes and conceptions of adolescents with regard to nuclear power and radioactive wastes. Journal of Applied Social Psychology, 21, 2007-2035.

Einsiedel, E. (2005). In the Public Eye: The Early Landscape of Nanotechnology among Canadian and U.S. Publics, August 5, http://www.azonano.com/Details.asp?ArticleID=1468. (accessed October 8, 2006).

Fife-Schaw, C. & Rowe, G. (1996). Public perception of everyday food hazards: A psychometric study. Risk Analysis, 16. 487-500.

Finucane, M. L. (2001). Public perceptions of risk. Oregonians for Rationality, http://www.o4r.org/pf_v6n2/Risk.htm. (accessed May 18, 2005).

Finucane, M. L., Alhakami, A., Slovic, P., & Johnson, D. M. (2000). The affect heuristic in judgments of risks and benefits. Journal of Behavioral Decision Making, 13, 1-17.

Fischhoff, B., Slovic, P., & Lichtenstein, S. (1979). Weighing the risks: Which risks are acceptable? Environment, 21, 17-20, 32-38. Reprinted in P. Slovic. (2000). The Perception of Risk, London: Earthscan Publications, Ltd., 121-136.

Fischhoff, B., Slovic, P., Lichtenstein, S., & Combs, B. (1978). How safe is safe enough? A psychometric study of attitudes towards technological risks and benefits. Policy Sciences, 9, 127-152.

Flynn, J., Slovic, P., & Mertz, C. K. (1993). Decidedly different: Expert and public views of risks from a radioactive waste repository. Risk Analysis, 13, 643-648.

Forrester Report. (2006). Forrester Report on Podcasting. April 6. http://www.podtech.net/?=510. (accessed October 5, 2006).

Frewer, L. J., Howard, C., Hedderly, D., & Shepherd, R. (1996). What determines trust in information about food related risk? Underlying psychological constructs. Risk Analysis, 16, 473-486.

Frewer, L. J., Howard, C., & Shepherd, R. (1997). Public concerns in the United Kingdom about general and specific applications of genetic engineering: Risk benefit and ethics. Science, Technology and Human Values, 22, 98-124.

Giles, J. (2005). Internet encyclopedias go head to head. Nature, 438, 900-901. http://www.nature.com/naturejournal/v438/n70707/full/438900a.html. (accessed June 9, 2006).

Hohenemser, C., Kates, R. W. & Slovic, P. (1983). The nature of technological hazard. Science, 220, 378-384. Reprinted in P. Slovic. (2000). The Perception of Risk, London: Earthscan Publications, Ltd., 168-181.

Jasanoff, S. (1993). Bridging the two cultures of risk analysis. Risk Analysis, 13, 123-129.

Jenkins-Smith, H. C. & Silva, C. L. (1998). Reliability Engineering and System Safety, 59, 107-122.

Johnson, B. (1992). Advancing understanding of knowledge’s role in lay risk perception. http://www.piercelaw.edu/risk/vol4/summer/johnson.htm. (accessed October 6, 2006).

Kahan, D. M., Slovic, P., Braman, D. & Gastil, J. (2006). Fear of democracy: A cultural evaluation of Sunstein on risk. Harvard Law Review, 119, 1071-1109.

Kajanne, A. & Pirttilä-Backman, A. (1999). Laypeople’s viewpoints about the reasons for expert controversy regarding food additives. Public Understanding of Science, 8, 303-315.

Kapferer, J. N. (1989). A mass poisoning rumor in Europe. Public Opinion Quarterly, 53, 467-481.

Kasperson, R. E., Renn, O., Slovic, P., Brown, H., Emel, J., Goble, R. L., Kasperson, J. X., & Ratick, S. J. (1988). The social amplification of risk: A conceptual framework. Risk Analysis, 8, 177-187.

Kletz, T. A. (1996). Risk – Two views: The public’s and the experts. Disaster Prevention Magazine, 5, 41-46.

Kraus, N., Malmfors, T., & Slovic, P. (1992). Intuitive toxicology: Expert and lay judgments of chemical risks. Risk Analysis, 12, 215-231.

MacGregor, D. G., Slovic, P. & Malmfors, T. (1999). How exposed is exposed enough? Lay inferences about chemical exposure. Risk Analysis, 19, 649-659.

McGann, R. (2004). The blogosphere by the numbers. ClickZ Network. November 22. http://clickz.com/stats/sectors/traffic_patterns/. (accessed

McDaniels, T. L., Axelrod, L. J., Cavanagh, N. S. & O’Riordan, T. (1997). Perception of ecological risk to water environments. Risk Analysis, 17, 341-352.

Milbrath, L. W. (1981). Citizen surveys as citizen participation mechanisms. The Journal of Applied Behavioral Science, 17, 478-496.

Neil, N., Malmfors, T., & Slovic, P. (1994). Intuitive toxicology: Expert and lay judgments of chemical risks. Toxicologic Pathology, 22:2, 198-201.

Pidgeon, N., Kasperson, R. E., & Slovic, P., eds. (2003). The Social Amplification of Risk, Cambridge, UK: Cambridge UP.

Renn, O. (2004). Perception of risk: socio-psychological models. Consumer Voice, March, p. 4.

Rothman, S. & Lichter, R. (1987). Elite ideology and risk perception in nuclear energy. American Political Science Review, 81, 383-404.

Rowe, G. & Wright, G. (2001). Differences in expert and lay judgments of risk: Myth or reality? Risk Analysis, 21. 341-356.

Savadori, L., Savio, S., Nicotra, E., Rumiati, R., Finucane, M., & Slovic, P. (2004) Expert and public perception of risk from biotechnology. Risk Analysis, 24:5, 1289-1299.

Savadori, L., Rumiati, R., & Bonini, N. (1998). Expertise and regional differences in risk perception: The case of Italy. Journal of Psychology, 57, 101-113.

Siebert, F. S. (1956). The Libertarian Theory. In F. S. Siebert, T. Peterson, & W. Schramm (Eds.), Four theories of the press (pp. 39-71). Urbana, IL: University of Illinois Press.

Sjöberg, L. (2002). The allegedly simple structure of experts’ risk perception: An urban legend in risk research, Science, Technology, & Human Values, 27, Autumn, 443-459.

Sjöberg, L. (2001). Limits of knowledge and the limited importance of trust. Risk Analysis, 21. 189-198.

Sjöberg, L. (2000a). Consequences matter, risk is marginal. Journal of Risk Research, 3, 287-295.

Sjöberg, L. (2000b). Perceived risk and tampering with nature. Journal of Risk Research, 3, 353-367.

Sjöberg, L. (1999a). Political decisions and public risk perception. A paper read at the Third International Public Policy and Social Science Conference, St. Catherine’s College, Oxford University, UK, July 28-30, 1999.

Sjöberg, L. (1999b). Risk perception by the public and by experts: A dilemma in risk management. Human Ecology Review, 6, 1-9.

Sjöberg, L. (1998). World views, political attitudes and risk perception. Risk: Health, Safety & Environment, 9, Spring. 137-152.

Slovic, P. (1999). Trust, emotion, sex, politics, and science: Surveying the risk-assessment battlefield. Risk Analysis, 19. 689-701.

Slovic, P. (1993). Perceived risk, trust, and democracy. Risk Analysis, 13. 675-682.

Slovic, P. (1987). Perception of risk. Science, 236, 280-285.

Slovic, P. (1986). Informing and educating the public about risk. Risk Analysis, 6, 403-415.

Slovic, P., Fischhoff, B., & Lichtenstein, S. (1985). Characterizing perceived risk. In Perilous Progress: Managing the Hazards of Technology. R. Kates, C. Hohenemser & J. Kasperson, eds. Boulder: Westview Press. 91-125.

Slovic, P., Fischhoff, B., & Lichtenstein, S. (1979). Rating the risks. Environment, 21, 14-20, 36-39.

Slovic, P., Malmfors, T., Krewski, D., Mertz, C. S., Neil, N., & Bartlett, S. (1995). Intuitive toxicology II: Expert and lay judgments of chemical risks in Canada. Risk Analysis, 15, 661-675.

Sparks, P. & Shepherd, R. (1994). Public perception of the potential hazards associated with food production and food consumption: An empirical study. Risk Analysis, 14, 799-806.

Starr, C. (1969). Social benefit versus technological risk. Science, 165, 1232-1238.

Sunstein, C. R. (2004). Risk and Reason: Safety, Law, and the Environment. NY: Cambridge UP.

Sunstein, C. R. (2001). The daily we. The Boston Review. Summer. http://www.boston.review.net/BR26.3/sunstein.html. (accessed June 15, 2006).

Wahlberg, A. A. & Sjöberg, L. (2000). Risk perception and the media. Journal of Risk Research, 3, 31-50.

Weinstein, N. D. (1988). Attitudes of the Public and the Department of Environmental Protection Toward Environmental Hazards. Final Report, Trenton, New Jersey, Division of Science and Research, New Jersey Department of Environmental Protection.

Wyler, A. R., Masuda, M. & Holmes, T. H. (1968). Seriousness of illness rating scale. Journal of Psychosomatic Research, 11, 363-374.

Tuesday, November 7, 2006

Berube is back and still feisty.

For good or bad, I have returned and will begin to post again. I had a problem with a lost computer on an airline flight and found myself trying to make sense of a re-budget job done on a grant I was PI on. (I had some surgery on my back, nothing serious, but other folks messed with the budget and I ended up without release time while administering the grant. Yeah, sure!)

And, I am submitting a NIRT proposal which is due on the 15th of November. It's on intuitive toxicology (an homage to Paul Slovic). After it is together and submitted, I will post my summary of research in risk perception.

On the 13th, ICON is rolling out its Current Practice report completed by a team from UCSB, and while I am only the lowly Communication Director, it has kept me busy.

I also am working on a new book which is keeping me distracted, and I will have a chapter out in a new book (in the chapter I ridicule the use of the word stakeholder in the nano debate) and an article out on the Magic Nano fiasco.

Still, no excuse. I plan on giving my point of view on publications and reports and want to apologize for being lackadaisical (as it seems) when it came to blogging.