
Trump’s Second Term Would Look Like This


Ever since the U.S. Senate failed to convict Donald Trump for his role in the January 6 insurrection and disqualify him from running for president again, a lot of people, myself included, have been warning that a second Trump term could bring about the extinction of American democracy. Essential features of the system, including the rule of law, honest vote tallies, and orderly succession, would be at risk.

Today, however, we can do more than just speculate about how a second Trump term would unfold, because the MAGA movement has been telegraphing its plans in some detail. In a host of ways—including the overt embrace of illiberal foreign leaders; the ruthless behavior of Republican elected officials since the 2020 election; Trump allies’ elaborate scheming, as uncovered by the House’s January 6 committee, to prevent the peaceful transition of power; and Trump’s own actions in the waning weeks of his presidency and now as ex-president—the former president and his allies have laid out their model and their methods.

Begin with the model. Viktor Orbán has been the prime minister of Hungary twice. His current tenure began in 2010. He is not a heavy-handed tyrant; he has not led a military coup or appointed himself maximum leader. Instead, he follows the path of what he has called “illiberal democracy.” Combining populist rhetoric with machine politics, he and his party, Fidesz, have rotted Hungarian democracy from within by politicizing media regulation, buying or bankrupting independent media outlets, appointing judges who toe the party line, creating obstacles for opposition parties, and more. Hungary has not gone from democracy to dictatorship, but it has gone from democracy to democracy-ish. Freedom House rates it only partly free. The International Institute for Democracy and Electoral Assistance’s ratings show declines in every democratic indicator since Fidesz took power.

[Jacob Heilbrunn: Behind the American right’s fascination with Viktor Orbán]

The MAGA movement has studied Orbán and Fidesz attentively. Hungary is where Tucker Carlson, the leading U.S. conservative-media personality (who is sometimes mentioned as a possible presidential contender), took his show for a week of fawning broadcasts. Orbán is the leader whom the Conservative Political Action Conference brought in as a keynote speaker in August. He told the group what it loves to hear: “We cannot fight successfully by liberal means.” Trump himself has made clear his admiration for Orbán, praising him as “a strong leader and respected by all.”

The U.S. is an older and better-established democracy than Hungary. How, then, could MAGA acolytes emulate Orbán in the American context? To simplify matters, set aside the possibility of a stolen or contested 2024 election and suppose that Trump wins a fair Electoral College victory. In this scenario, beginning on January 20, 2025, he and his supporters set about bringing Budapest to the Potomac by increments. Their playbook:

First, install toadies in key positions. Upon regaining the White House, the president systematically and unabashedly nominates personal loyalists, with or without qualifications, to Senate-confirmed jobs. Assisted by the likes of Johnny McEntee, a White House aide during his first term, and Kash Patel, a Pentagon staffer, he appoints officials willing to purge conscientious civil servants, neutralize or fire inspectors general, and ignore or overturn inconvenient rules.

A model for this type of appointee is Jeffrey Clark. A little-known lawyer who led the Justice Department’s environmental division, he secretly plotted with Trump and the White House after the 2020 election to replace the acting attorney general and then use the Justice Department’s powers to pressure officials in Georgia and other states to overturn Joe Biden’s victory. Only the threat of mass resignations at the Justice Department derailed the scheme.

Trump has plenty of Jeffrey Clarks to choose from, and a Republican-controlled Senate would confirm most or all of them. But no matter if the Senate balks or if Democrats control it. Trump will simply do more, much more, of what he raised to an art in his first term: appointing “acting” officials to circumvent Senate confirmation—a practice that, the Associated Press reports, “prompted muttering, but no more than that, from Republican senators whose job description includes confirming top administration aides.”

Second, intimidate the career bureaucracy. On day one of his second term, Trump signs an executive order reinstating an innovation he calls Schedule F federal employment. This designation would effectively turn tens of thousands of civil servants who have a hand in shaping policy into at-will employees. He approved Schedule F in October of his final year in office, but he ran out of time to implement it and President Biden rescinded it.

Career civil servants have always been supervised by political appointees, and, within the boundaries of law and regulation, so they should be. Schedule F, however, gives Trump a new way to threaten bureaucrats with retaliation and termination if they resist or question him. The result is to weaken an important institutional safeguard against Trump’s demands to do everything from harass his enemies to alter weather forecasts.

Third, co-opt the armed forces. Having identified the military as a locus of resistance in his first term, Trump sets about cashiering senior military leaders. In their place, he promotes and installs officers who will raise no objection to stunts such as sending troops to round up undocumented immigrants or intimidate protesters (or shoot them). Within a couple of years, the military will grow used to acting as a political instrument for the White House.

Fourth, bring law enforcement to heel. Even more intimidating to the president’s opponents than a complaisant military is his securing full control, at long last, over the Justice Department.

In his first term, both of Trump’s attorneys general bowed to him in some respects but stood up to him when it mattered most: Jeff Sessions by recusing himself from the Russia investigation and allowing a special counsel to be appointed; Bill Barr by refusing to endorse Trump’s election lies and seize voting machines. Everyday prosecutions remained in the hands of ordinary prosecutors.

That now changes. Trump immediately installs political operatives to lead DOJ, the FBI, and the intelligence and security agencies. Citing as precedent the Biden Justice Department’s investigations of the January 6 events, the White House orchestrates criminal investigations of dozens of Trump’s political enemies, starting with critics such as the ousted Representative Liz Cheney and whistleblowers such as the former White House aide Cassidy Hutchinson. With or without winning convictions, multipronged investigations and prosecutions bankrupt their targets financially and reputationally, menacing anyone who opposes the White House.

[David Frum: Trump is back on the ballot]

Most actions carried out by the Justice Department and national-security agencies remain routine in 2025 and beyond, but that doesn’t matter: No prosecution is above suspicion of political influence, and no Trump adversary is exempt from fear. Just as important is whom the government chooses not to prosecute or harass: It stays its hand against MAGA street militias, election shysters, and other allies of the president. The result is that federal law enforcement and the security apparatus become under Trump what Trump claims they are under Biden: political enforcers.

Fifth, weaponize the pardon. In Trump’s first term, officials stood up to many of his illegal and unethical demands because they feared legal jeopardy. The president has a fix for that, too. He wasn’t joking when he mused about pardoning the January 6 rioters. In his first term, he pardoned some of his cronies and dangled pardons to discourage potential testimony against him, but that was a mere dry run. Now, unrestrained by politics, he offers impunity to those who do his bidding. They may still face jeopardy under state law and from professional sanctions such as disbarment, but Trump’s promises to bestow pardons—and his threats to withhold them—open an unprecedented space for abuse and corruption.

Sixth, the final blow: defy court orders. Naturally, the president’s corrupt and lawless actions incite a blizzard of lawsuits. Members of Congress sue to block illegal appointments, interest groups sue to overturn corrupt rulemaking, targets of investigations sue to quash subpoenas, and so on. Trump meets these challenges with long-practiced aplomb. As he has always done, he uses every tactic in the book to contest, stonewall, tangle, and politicize litigation. He creates a perpetual-motion machine of appeals and delays while court after court rules against him.

Ultimately, however, matters come to a head. He loses on appeal and faces court orders to stop what he is doing. At that point, he simply ignores the judgments.

A famous precedent suggests that he would get away with it. In 1832, the Supreme Court ruled that states were illegally seizing Indian lands. President Andrew Jackson, a racist proponent of forced assimilation, declined to enforce the verdict. The states continued stealing Indian lands, and the federal government joined in. Trump, who hung a portrait of Jackson near his desk in the Oval Office, no doubt knows this bit of history. He probably also knows the consequences Jackson faced for openly defying the Court: none.

With reelection in the balance, defying the courts was a bridge the president did not cross in his first term. From the beginning of that term, when the Supreme Court scrutinized his Muslim travel ban, to the very end, when the Court swatted away his blitz of spurious election lawsuits, the judiciary was the strongest bastion of the rule of law. Its prestige and authority were such that not even a belligerent sociopath dared defy it.

Yet having been reinstated and never again to face voters, Trump now has no compunctions. The courts’ orders, he claims, are illegitimate machinations of Democrats and the “deep state.” Ordered to reinstate an illegally fired inspector general, the Justice Department nonetheless bars her from the premises. Ordered to rescind an improperly adopted regulation, the Department of Homeland Security continues to enforce it. Ordered to provide documents to Congress, the National Archives shrugs.

At first, the president’s lawlessness seems shocking. Yet soon, as Republicans defend it, the public grows acclimated. To salvage what it can of its authority, the Supreme Court accommodates Trump more than the other way around. It becomes gun-shy about crossing him.

And so we arrive: With the courts relegated to advisory status, the rule of law no longer obtains. In other words, America is no longer a liberal democracy, and by this point, there is not much anyone can do about it.

[Read: Trump soft-launches his 2024 campaign]

In the first term, resignation threats acted as a brake on Trump. They thwarted the Jeffrey Clark scheme, for instance. A resignation threat by the CIA director deterred Trump from installing a hack as her deputy. A resignation threat by the White House counsel deterred him from firing Special Counsel Robert Mueller.

Now, however, the president has little to fear politically, because he will never again appear on a ballot. If officials threaten to resign, he can replace or circumvent them. Their departures may slow him down but cannot stop him. Besides, he finds ways to remind his subordinates that angering him is a risky business. Noisy resignations will result in harassment by his supporters (the sorts of torments that hundreds of honest election officials have endured) and—you never know!—maybe by federal prosecutors and the IRS, too.

Might he go so far as to turn even Republicans in Congress against him? Unlikely. We should rationally assume that if Republicans protected him after he and his supporters attempted a coup, they will protect him no matter what else he does. Republicans are now so thoroughly complicit in his misdeeds that anything that jeopardizes him politically or legally also jeopardizes them. He already showed in his first term that he can and will stonewall congressional investigations. Unless Democrats drive Republicans into the political wilderness, overriding his veto (which requires a two-thirds vote of both chambers) is nigh-on impossible. Impeachment no longer frightens or even concerns him, because he has weathered two attempts and come back triumphantly.

Of course, there are congressional hearings, contempt-of-court orders, outraged New York Times editorials. Trump needn’t care. The MAGA base, conservative media, and plenty of Republicans in Congress defend their leader with whatever untruths, conspiracy theories, and what-abouts are needed. Fox News and other pro-Trump outlets play the role of state media, even if out of fear more than enthusiasm.

Meanwhile, MAGA forces are busy installing loyalists as governors, election officials, district attorneys, and holders of other crucial state and local positions. They do not succeed in every attempt, but over the course of four years, they gather enough corrupt officials to cast doubt on the legitimacy of any election they lose. They invent creative ways to obstruct anyone who challenges them politically. And they are not shy about encouraging thuggish supporters to harass and menace “traitors.”

And so, after four years? America has crossed Freedom House’s line from “free” to “partly free.” The president’s powers are determined by what he can get away with. His opponents are harried, chilled, demoralized. He is term-limited, but the MAGA movement has entrenched itself. And Trump has demonstrated in the United States what Orbán proved in Hungary: The public will accept authoritarianism, provided it is of the creeping variety.

“We should not be afraid to go against the spirit of the age and build an illiberal political and state system,” Orbán declared in 2014. Trump and his followers openly plan to emulate Orbán. We can’t say we weren’t warned.

ManBehindThePlan (394 days ago): A bleak premise, a grim and blighted term, a horrifying future — all surrounding our country’s illiberal tendencies.

More than your share


The math is simple: many people do less than they should.

They might be selfish, but it’s likely that they’re struggling with a lack of resources or a story of insufficiency. Either way, in any community or organization, many people contribute less than their peers.

Whether it’s splitting a check, getting a project done or making an impact on the culture or a cause, if you want things to get better, the only way is to be prepared to do more than your fair share.

Because we need to make up for the folks who don’t.

ManBehindThePlan (397 days ago): Having a serving mindset brings rewards past the immediate.

How NEPA works


The National Environmental Policy Act (NEPA) is a piece of federal environmental legislation that was passed in 1969 towards the beginning of an “eternal September” of environmental laws. NEPA is often called the “magna carta” of environmental laws because of how influential it has been in shaping environmental policy. Not only does NEPA significantly influence federal government actions, but the law has served as a template that has been widely copied, both by state governments (in the form of “little NEPAs” such as California’s CEQA), and by other countries.

Environmental laws over time. Not shown: another 30 years of environmental laws

NEPA is the law that requires federal agencies to produce an environmental impact statement for any actions likely to have significant effects on the environment. These statements (which can be thousands of pages long, take years to prepare, and must be completed before the project can start), along with the broader perception that the NEPA process is slow and unwieldy, have made NEPA a frequent target of criticism and reform efforts.

Because NEPA potentially affects every federal government action, sometimes in very large ways, it’s worth understanding how it works.

NEPA origins

NEPA evolved in a somewhat idiosyncratic fashion - it was originally intended as a sweeping environmental reform that would create a new policy of federal environmental stewardship:

the statute announces ‘the continuing policy of the Federal Government’ that federal agencies should ‘use all practicable means, consistent with other essential considerations of national policy, to improve and coordinate Federal plans, functions, programs, and resources’ to achieve such key environmental objectives as intergenerational trusteeship, provision of safe and healthful surroundings, beneficial use of the environment, preservation of cultural and natural heritage, and protection of renewable resources.

But after NEPA was passed, these broad, sweeping reforms ended up amounting to little - courts found the provisions were too vague to be enforced, and they were largely ignored by federal agencies.

However, late in the process of drafting NEPA, a seemingly minor provision was added that required agencies to produce a “detailed statement” of the environmental impacts of any major federal action. This provision was a much clearer requirement that the courts enforced vigorously:

Mandatory procedures were something courts could and would enforce, especially under the command of unambiguous statutory terms like ‘all’, ‘shall’, ‘every’, and ‘detailed statement’. The threat of judicial enforcement, in turn, prompted agencies to be attentive to procedural detail, lest important agency actions be held up by litigation and injunction. Procedure soon overtook substance, transforming NEPA into what has become, in practice, almost purely a procedural statute.

The years immediately following the passage of NEPA resulted in a flurry of litigation as courts determined exactly what was required of the “detailed statement” NEPA mandated. To clarify what NEPA compliance required, the Council on Environmental Quality (CEQ), an executive branch organization created by NEPA and charged with overseeing its implementation, issued a series of guidelines in 1971. In 1978, these guidelines became regulation, creating the “modern” NEPA process we have today.

The NEPA process

NEPA as it exists today has largely become a procedural requirement - NEPA doesn’t mandate a particular outcome, or require that the government place a particular weight on environmental considerations [0]. It simply requires that the government consider the environmental impact of its actions, and that it inform the public of those considerations. NEPA doesn’t prevent negative environmental impacts, so long as those impacts have been properly documented and the agency has taken a “hard look” at them - as one agency official described it, “I like to say you can pave over paradise with a NEPA document.”

More specifically, NEPA requires that a “detailed statement” be produced describing any significant environmental impacts of “major” federal actions. “Major federal action” is interpreted broadly: in practice, little effort seems to be placed on determining whether an action qualifies as “major,” and anything that might have significant environmental effects must be NEPA compliant.

(There are also a small number of federal actions that are exempt from NEPA compliance altogether. These include emergency actions that must be taken quickly.)

Determining whether a detailed statement is required has evolved into a tiered system of NEPA analysis. (This tiered system is not in the text of NEPA itself, but is the result of case law and the 1978 CEQ regulations.)

At the bottom, you have categorical exclusions (CEs.) These are actions that by their very nature have been determined to not have a major impact on the environment. For example, the FAA has determined that the acquisition of snow removal equipment can be categorically excluded, and the Bureau of Land Management has determined that “activities that are educational, informational, or advisory” can be categorically excluded. The vast majority of federal agency actions will generally fall under a categorical exclusion.

Categorical exclusions require the least amount of effort to complete, in some cases just requiring a form to be completed and signed (though as we’ll see, this can vary significantly by agency.) However, an action must fall under an approved class of action to be a CE, and adding a new type of excluded action can in fact be a significant effort, involving research into past projects, a period of public comment for the proposed exclusion, and approval by the CEQ.

Categorical exclusions are also sometimes added via legislative action. The Energy Policy Act of 2005, for instance, created several new categorical exclusions for certain oil and gas actions.

Originally few actions were classified as CEs, but as agencies have done more projects and gained a better sense of which projects will and won’t have major environmental impacts, more categorical exclusions have been added.

If an action can’t be categorically excluded, the next step for NEPA compliance is typically to figure out whether the action will have “significant” environmental effects. If it’s unclear, an environmental assessment (EA) is performed - a high-level look at the proposed action to determine whether the environmental impacts cross the “threshold of significance” and thus require a full environmental impact statement. If the EA finds no significant impacts, the agency issues a Finding of No Significant Impact (FONSI).

EAs are generally more effort than categorical exclusions, though they also can vary significantly in the amount of effort required to create them.

If the EA concludes that the proposed action will have significant impacts, the “detailed statement,” known as an environmental impact statement (EIS), is produced. An EIS describes the proposed action, the likely environmental impacts of that action, alternatives to taking the action (typically including ‘no action’), and plans for soliciting feedback from the public.

EISs have become long, involved analyses that take years to complete and are often thousands of pages in length. For instance, the most recent EIS available in the EPA’s database (for a Forest Service forest restoration plan) comes in at 1,294 pages (including appendices) and took over 6 years to complete. In the late 1980s, there was a minor government scandal when the Department of Energy spent $1.4 million printing and mailing 17,000 copies of the 8,000-page EIS for the Superconducting Supercollider (the statements weighed a combined 221 tons.)

NEPA process flowchart
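For readers who prefer code to flowcharts, here is a minimal, illustrative Python sketch of the tiered decision flow described above. The function and parameter names are my own simplifications, not anything from the statute or the CEQ regulations; real significance determinations are far more involved, and mitigation (discussed next) adds another path to a FONSI.

```python
from enum import Enum, auto

class NepaOutcome(Enum):
    CATEGORICAL_EXCLUSION = auto()           # CE: action falls in a pre-approved excluded class
    EA_WITH_FONSI = auto()                   # EA ends in a Finding of No Significant Impact
    ENVIRONMENTAL_IMPACT_STATEMENT = auto()  # EIS: the full "detailed statement"

def nepa_path(action_class: str,
              approved_exclusions: set[str],
              clearly_significant: bool,
              ea_crosses_threshold: bool) -> NepaOutcome:
    """Illustrative sketch of the tiered NEPA analysis described above.

    - If the action falls under an approved categorical exclusion, a CE suffices.
    - If impacts are clearly significant, the agency goes straight to an EIS.
    - Otherwise an EA is performed; if it crosses the "threshold of significance,"
      an EIS follows, else the agency issues a FONSI.
    """
    if action_class in approved_exclusions:
        return NepaOutcome.CATEGORICAL_EXCLUSION
    if clearly_significant or ea_crosses_threshold:
        return NepaOutcome.ENVIRONMENTAL_IMPACT_STATEMENT
    return NepaOutcome.EA_WITH_FONSI

# Example: the FAA treats acquiring snow removal equipment as a categorical exclusion.
print(nepa_path("snow removal equipment", {"snow removal equipment"},
                clearly_significant=False, ea_crosses_threshold=False))
```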

Because an EIS is so time consuming and expensive to prepare, procedures have evolved that let agencies avoid doing them. One popular method is the “mitigated FONSI” - if an environmental assessment determines that a federal action is likely to have significant impacts, the agency can include mitigation measures that bring the net impact below the threshold of significance and avoid triggering the EIS requirement. For instance, for the proposed construction of a cellulosic biorefinery on wetlands, the Department of Energy was able to achieve a mitigated FONSI by (among other things) purchasing “wetland credits” from a “wetland mitigation bank.” More recently, the FAA required SpaceX to undertake 75 mitigation measures as part of a mitigated FONSI for its Boca Chica launch site license (these ranged from periodic water spraying to control particulates and fugitive dust, to issuing notices to personnel regarding lighting during sea turtle nesting season, to preparing a report on the historic events of the Mexican War that took place in the area.)

NEPA agency by agency

One challenge with understanding NEPA is that there’s a great deal of variation from agency to agency, both in the size and scope of their NEPA efforts and how they structure their NEPA procedures.

Different agencies, for instance, differ greatly in the number of NEPA analyses they perform. Over the past 35 years, nearly 100 federal agencies have had to produce an environmental impact statement, but just 10 agencies are responsible for 75% of EISs.

Agencies charged with managing federal lands (such as the Forest Service and the Bureau of Land Management) or with building large-scale infrastructure (such as the Federal Highway Administration and the Army Corps of Engineers) account for an overwhelmingly large portion of NEPA efforts - those 4 agencies are responsible for more than 50% of environmental impact statements produced over the last 35 years.

The same seems to be true for EAs (though the data here isn’t great.) In 2015 (the last year for which we have data), just 2 agencies (the Bureau of Land Management and the Corps of Engineers) were responsible for more than 50% of environmental assessments.

(Note that some of the agencies that produce the most EAs aren’t the ones that produce the most EISs - HUD and USDA Rural Development notably seem to produce a lot of EAs but relatively few EISs.)

Agencies also vary in the proportion of actions that require a higher tier of NEPA analysis. For the Department of Energy, for instance, 98% of their actions are categorical exclusions, 1.6% require an EA, and just 0.4% require an EIS [1].

For the Forest Service, on the other hand, 15.9% and 1.9% of agency actions require an EA or an EIS, respectively (~10x and ~4x the DoE rate.)

Inter-agency variation also shows up in the amount of effort required to produce a given analysis: the length of environmental impact statements, for instance, varies greatly from agency to agency, as does the time to complete them.

The same is true if we look at lower tiers of analysis. For environmental assessments, some agencies seem to routinely produce very short ones. Every EA I found for the Corps of Engineers, for instance, was less than 100 pages, with many fewer than 30 pages (this USDA report suggests the Corps of Engineers has a very different NEPA process and culture than other agencies.) The FAA, on the other hand, seems to produce enormous EAs. Their EA for a new runway approach procedure at Boston Logan is over 2100 pages including appendices, and their EA for licensing SpaceX’s Boca Chica facility is over 1200 pages.

Likewise, the amount of work a categorical exclusion requires seems to vary greatly between agencies. For some, such as the Department of Energy, it’s mostly just filling out the proper form, and takes just 1 or 2 days to complete. In other cases, the categorical exclusion is a more substantial undertaking. The median time to complete a Forest Service categorical exclusion, for instance, was 105 days as of 2018. And for the Federal Highway Administration, Trnka 2014 notes that “The template documents for preparing FHWA CEs in some states are 20 or more pages long and routinely lead to documents of 100 or more pages.” This discussion of the Florida I-10 bridge replacement notes that it took several months to approve the CE, as it had to be signed off by an extremely backlogged Coast Guard. 

Here’s the FHWA describing the procedure that might be required to determine if a road falls under a categorical exclusion:

Now let's look at a second project where the widening of the road requires additional right-of-way from a public park. To decrease crashes on an existing roadway, shoulders will be added. In a site visit with the State DOT, it was determined that the CE [categorical exclusion] is likely appropriate under the D-list in the regulation. But because of the need for right-of-way and potential impacts to public land, the local public agency (LPA) is asked to:

• Conduct several studies related to the presence of endangered species and archeological sites

• Coordinate with FHWA to address resources and potential use of the public park

• Follow the State DOT's public involvement procedures for formal public input

The studies confirmed that there were no significant environmental impacts.

Variation between agencies also exists in NEPA analysis preparation times. Dewitt 2013 noted that although average EIS preparation time across all agencies went up between 2006 and 2010, several agencies (such as the Corps of Engineers, Forest Service, and Highway Administration) saw their preparation times fall. And looking at the DoE, its average EIS preparation time was fairly constant from the mid-1990s up until ~2011.

There can also be significant variation in NEPA operations within a single agency over time. The Department of Energy, for instance, was for many years an organization that treated NEPA compliance as an afterthought. But in 1989, a new Secretary of Energy (James Watkins) emphasized compliance with environmental laws (including NEPA), substantially changed the organization’s NEPA procedures, and increased the number of EISs it performed.

Variation might also exist within a single agency between offices. Fleischman 2020 noted that “there appears to be substantial heterogeneity within the USFS concerning how NEPA processes are handled, in terms of both level of analysis (i.e., some offices perform many EISs, others many EAs or CEs) and time spent on analysis.”

NEPA trends

Let’s look a little closer at the data around NEPA analyses.

For EISs, as we’ve seen, these are long documents. As of 2018, the average page length of an EIS was 661 pages (including appendices.) They also take a long time to complete - as of 2018, the average time between the “Notice of Intent (NOI)” (when an agency files its intent to create an EIS) and the “Record of Decision (ROD)” (when it makes its official decision on how to proceed) was 4.5 years (and this likely understates the true preparation time, as work often begins before the NOI is filed.)

Both seem to be increasing over time - the documented preparation time for EISs rose nearly 50% between 2000 and 2018.

For page length trends, I couldn’t find any overall summaries, but the DoE noted in 2017 that “the average length of DoE EISs have more than doubled over the past 20 years.”

The actual number of NEPA analyses, though, seems to be decreasing, with the number of final EISs filed per year falling over time.

(As far as I can tell, this does not include EISs completed under ARRA.)

Less data is available, but something similar appears to be true for EAs, with fewer and fewer of them being produced per year (though most of these datapoints are estimates and likely have wide error bars).

For CEs there’s even less data, but there’s some suggestive evidence: Fleischman 2020 noted that for the Forest Service, the number of categorical exclusions is dropping over time.

It’s not clear what the mechanism is here. Over time, we should naturally expect fewer EAs and more CEs (once a type of project is understood to have few or no significant effects, future projects similar to it can often be a CE.) But this wouldn’t explain a drop across all levels.

One possible explanation is a reduction in federal resources devoted to NEPA/environmental compliance. A 2003 study noted that NEPA staff and budgets across all agencies had been repeatedly reduced, and staff were being asked to “do more with less”, and Fleischman 2020 noted that “flat or declining annual appropriations and dramatically rising fire suppression costs” were likely part of the reason for fewer Forest Service NEPA analyses.

NEPA costs

How much do these NEPA analyses cost?

Getting a clear answer to this is difficult, partially because there’s often no clear distinction as to what constitutes a “NEPA task”, partially because agencies in general don’t attempt to track this information, and partially because (once again) there’s significant variation from agency to agency. In general, costs track the time and effort required to perform the analysis, with CEs being cheaper than EAs, and EAs being cheaper than EISs.

  • In 2003 the CEQ estimated that “small” EAs cost between $5,000 and $20,000, “large” EAs cost between $50,000 and $200,000, and EISs typically cost between $250,000 and $2,000,000.

  • A Forest Service official testified in 2007 that their CEs cost approximately $50,000 on average, EAs $200,000 on average, and EISs $1,000,000 on average.

  • The DoE (which tracked its NEPA contractor costs until 2017) noted that in 2016 its average EA cost was $386,000 and its average EIS cost was $7.5 million (CE costs were described as “not significant”.) EIS cost was relatively steady over time, while EA cost had trended upward in recent years.

Most reports note that the cost for a NEPA analysis is typically a small fraction of the overall project cost (typically less than 1%.)
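As a rough back-of-envelope illustration, we can combine the Forest Service tier shares quoted earlier (roughly 82% CE, 15.9% EA, 1.9% EIS) with the per-tier averages from the 2007 testimony above. Mixing sources and years this way is my own simplification, not a figure from any report, but it suggests an expected NEPA cost on the order of $90,000 per Forest Service action:

```python
# Back-of-envelope estimate of the expected NEPA cost per Forest Service action,
# combining the tier shares and per-tier average costs quoted earlier in this piece.
# Mixing 2018-era tier shares with 2007 testimony costs is a rough illustration only.
tier_share = {
    "CE": 1 - 0.159 - 0.019,  # remainder after the EA (15.9%) and EIS (1.9%) shares
    "EA": 0.159,
    "EIS": 0.019,
}
avg_cost = {"CE": 50_000, "EA": 200_000, "EIS": 1_000_000}  # 2007 testimony figures

expected_cost = sum(tier_share[t] * avg_cost[t] for t in tier_share)
print(f"Expected NEPA cost per action: ~${expected_cost:,.0f}")  # ~$91,900
```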

Like with all things NEPA, these costs seem to be heavily right-skewed, with a small number of EAs/EISs dramatically more expensive than the others. The years when the DoE’s average EIS cost spiked, for instance, were years when a particularly expensive EIS (such as the one for Yucca Mountain) was completed. In general, NEPA analyses seem to form a series of overlapping right-skewed distributions, with the longest and most arduous CEs taking more effort than the easiest EAs, and the largest EAs more work than the simplest EISs.

Even though most federal government actions fall under a categorical exclusion, the largest and most complex projects will invariably need a higher level of NEPA analysis, so in per-dollar terms the fraction of EAs and EISs is much higher. The Highway Administration, for instance, noted in 2001 that while 91% of projects could be classified as a categorical exclusion, this represented only 74% of project dollars.
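One implication of those FHWA figures (my own arithmetic, for illustration only): if 91% of projects are CEs but those account for only 74% of project dollars, then the 9% of projects needing an EA or EIS account for 26% of dollars, which makes the average EA/EIS project several times larger in dollar terms than the average CE project:

```python
# Rough arithmetic on the 2001 FHWA figures quoted above (illustration only).
ce_project_share, ce_dollar_share = 0.91, 0.74
ea_eis_project_share = 1 - ce_project_share    # 9% of projects need an EA or EIS
ea_eis_dollar_share = 1 - ce_dollar_share      # ...but they represent 26% of dollars

# Average project size relative to the overall average project:
avg_ce_size = ce_dollar_share / ce_project_share              # ~0.81x
avg_ea_eis_size = ea_eis_dollar_share / ea_eis_project_share  # ~2.9x
print(f"EA/EIS projects average ~{avg_ea_eis_size / avg_ce_size:.1f}x the size of CE projects")  # ~3.6x
```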

NEPA, the courts, and uncertainty

NEPA compliance is enforced via the courts - agencies can be sued under the Administrative Procedure Act for not properly complying with NEPA (using an inappropriate tier of analysis for an action, ignoring certain likely impacts, etc.) NEPA lawsuits are common - the Department of Justice has noted that NEPA is the most litigated environmental law, and that 1 out of every 450 federal actions taken to comply with NEPA is challenged in court.

In practice, this creates something of a moving target for NEPA compliance. Agencies must be constantly monitoring court outcomes to determine what compliance requires (this is sometimes described as “NEPA common law”), and over time more and more potential impacts have had to be included in NEPA analyses. The National Association of Environmental Professionals helpfully publishes a relevant list of NEPA case outcomes in its annual report. 

The frequency of NEPA litigation is partly due to the fact that, while NEPA lawsuits often target legitimate inadequacies (such as not considering the risks of an infectious pathogen research facility being built near a major population center), they are sometimes used as a weapon by activist groups to try to stop projects they don’t like. While lawsuits can’t stop a project permanently, the hope is that a lawsuit will result in an injunction that stops the project temporarily, and that the delay will make the project unattractive enough to cancel:

  • An environmental activist opposing a missile defense project stated that “the hope is that delay [occasioned by NEPA litigation] will lead to cancellation...That’s what we always hope for in these suits.”

  • The executive director of the Surface Transportation Policy Project testified that “In the struggle between proponents and opponents of a…project, the best an opponent can hope for is to delay things until the proponents change their minds or tire of the fight.”

  • A “grassroots litigation training manual” produced by the Community Environmental Legal Defense Fund stated “in an area devoid of endangered species, impacts to waterways and floodplains or of federal funding, NEPA may be the only tool that grassroots groups have [to fight highway projects]”.

And Forest Service officials noted that many groups would simply oppose or litigate any project at all (often because this helped them raise more funding.)

While most NEPA analyses don’t get litigated (as of 2013 there were about 100 NEPA lawsuits filed per year), in practice the threat of a lawsuit seems to push agencies towards producing “litigation-proof” NEPA documents:

  • Mortimer 2011, for instance, found that NEPA leaders decide on the level of analysis to do primarily based on perceptions of public controversy and litigation risk.

  • One agency testified in 2005 to the House committee on NEPA reform that litigation – or the threat of litigation – has in effect “forced” them to spend as much as necessary to create “bullet proof” documents.

  • A report by the Forest Service notes that “Team members often believe that much of their work is ‘for the courts’ and not particularly useful for line officers who make decisions.”

  • Testimony from a mining executive in 2005 noted that “there are now very few issues an agency is willing to consider insignificant, due to concern about having their decision appealed”, concerns which have been echoed by other industry stakeholders.

And Stern 2014 notes that:

Despite calls for shorter NEPA documents in the Forest Service, most IDTLs and team members felt pressure to include what they considered to be otherwise unnecessary information in their NEPA documents. Interviewees described pressure from the DM in some cases to do so. In other cases, this was attributed directly to a general fear of the public pointing out that something was missed and reopening the NEPA process.

An example of this is the threat of lawsuits incentivizing the inclusion of more and more research in environmental analyses, regardless of its merits:

An excellent illustration of excessive analysis due to management uncertainty is the Beschta Report. Commissioned by the Pacific Rivers Council in 1995, eight scientists drafted a paper, “Wildfire and Salvage Logging,” commonly known as the Beschta Report.

The paper has never been published in any scientific or professional journal, nor has it ever been subject to any formal peer review. In 1995, Forest Service scientists and managers expressed strong reservations about the report, which contains many unsubstantiated statements and assumptions. Nevertheless, the courts have sometimes shown support.

Groups have challenged postfire recovery projects on the grounds that the Forest Service has failed to consider the Beschta Report. In four cases, the courts have ruled that Forest Service decisions violated NEPA because the associated records did not adequately document the agency’s consideration of the Beschta Report. In two other cases, courts have ruled in favor of the Forest Service on this issue.

In view of the court record, forest planners might feel compelled to thoroughly document their consideration of the Beschta Report’s principles and recommendations, even though the underlying land management issues are already addressed in the record. That includes documenting why some elements of the Beschta Report are not relevant to the specific proposed project.

The court record has inspired some groups to demand that the Forest Service consider other papers and articles supposedly relevant to proposed actions. Sometimes the proffered list of references exceeds 100 entries. To minimize the risk of adverse judicial opinions, land managers might feel constrained to fully document within the body of the NEPA document their detailed consideration of each and every paper or article.

This mechanism seems to be largely responsible for the length of NEPA documents. The 1978 CEQ regulations state that an EA should generally be less than 75 pages, and an EIS less than 150 pages. But the uncertainty about what’s needed to comply with NEPA, combined with the natural risk aversion of government agencies, pushes these documents to be longer and longer.

In practice, figuring out what “bulletproof” entails is difficult. Stern 2014 gives an example of the Forest Service trying to figure out what sort of watershed analysis model is most likely to hold up in court:

…In one case, for example, an IDTL asked the Regional Office whether they could use a particular watershed model that had been used elsewhere. Personnel in the Regional Office instructed the team not to use the model because it would represent a departure from the traditional approach used on the specific forest and could expose the process to additional external scrutiny by setting a new precedent. The IDTL described the response from the Regional Office after the ID team submitted their preliminary report, which did not include the model.

“It was like [from the Region], ‘Hey, you need to run some models because there was this court decision, and it was up-held because they had model information, so you got to run the model for this.’ [laughing] It was kind of like, ‘okay that’s a 180 from what you told us initially.’ And then after the model was run, and we sent the document out, [the Region came back and said], ‘Oh jeez, maybe you shouldn’t have run the model because… the court case was reversed.’ [laughing]”

The uncertainty that the NEPA process creates - how thorough an analysis will be required, how long it will take to perform, what sorts of mitigations will be required, what sorts of follow-up analysis will be required, whether the analysis will get litigated - makes it difficult to plan projects with substantial NEPA requirements. A mining executive noted that the NEPA process has resulted in the US having unusually burdensome permitting requirements by world standards:

In considering a new project the first thing I am asked is how long will it take and what will it cost to get it permitted. I can answer this question with a high degree of confidence in most jurisdictions around the world, with the exception of the United States. When I first began working with NEPA in the mid 1980s the time and cost to prepare an EIS for a mining project took about 18 months and cost about $250,000-$300,000. Today [2005] an EIS for a mining project may take 5-8 years and cost $7-8 million or more, before factoring in expected appeals and litigation of the ultimate decision. Thus, it is very difficult to make business decisions in the US under the current permitting environment on federal lands.

Reitze 2012 notes that NEPA is used to increase the costs and unpredictability of fossil fuel development, in an attempt to make renewable energy more attractive by comparison. And Glen 2022 notes that uncertainty around NEPA litigation also makes planning renewable energy projects (in this case, wind power) more difficult and risky. A transmission line executive noted in 2009 that the uncertainty and unclear case law around considering climate change impacts had created a “nightmare” for him.

This uncertainty also makes changing NEPA somewhat risky. Experts have noted, for instance, that rules to accelerate NEPA processes or impose maximum timelines might result in more analyses being challenged in court (for failing to take the proper “hard look”). One consultant for energy projects suggested that the Trump-era NEPA changes (which have since been rolled back) were likely to increase project uncertainty and delay for energy projects in the short term, as the changes would result in increased litigation.

Untangling the effects of NEPA

One challenge with understanding the effects of NEPA is that it’s typically just one of many environmental laws a project must comply with. This discussion of a North Carolina light rail project’s NEPA process, for instance, lists 13 other federal environmental laws and executive orders the project had to comply with, and this page lists 20 federal regulations other than NEPA that govern Forest Service actions. NEPA is often referred to as an “umbrella statute” - an overarching process that organizations can use to manage the entire environmental compliance process.

This makes it unclear what delays are caused by NEPA, and what are actually caused by something else under the “NEPA umbrella.” For highway projects, Luther 2012 notes that “this use of NEPA as an “umbrella” compliance process can blur the distinction between what is required under NEPA and what is required under separate authority” and that “despite the focus on the NEPA process, it is unclear whether or how changes to that process would result in faster highway project delivery.” A 2014 report from the GAO came to similar conclusions, noting that the time for completing CEs and EAs often depended on how many environmental regulations and processes the project needed to comply with.

NEPA also tends to mask (and get blamed for) other sources of delay that occur during the NEPA process. If an organization de-prioritizes a project, or has insufficient funding for it, or runs into local opposition, that might show up as an extended time to complete the NEPA analysis, despite the fact that the delay was due to other factors.

One illustration of this is how quickly NEPA analyses can get done in urgent situations where everyone is aligned and the common sources of delay aren’t present. One example is the distribution of stimulus funds following the American Recovery and Reinvestment Act (ARRA) in 2009. Over 190,000 projects, totaling $300 billion worth of stimulus funds, were required to have NEPA reviews before the projects could begin. After the passage of ARRA, categorical exclusions were completed at a rate of more than 400 per day, and 670 environmental impact statements were completed over the next 7 months.

Another example is a bridge collapse, after which state DOTs work to restore the bridge as quickly as possible. Following the I-35W bridge collapse in Minneapolis, for instance, the environmental review for the replacement bridge was completed in less than 2 months. A report by the FHWA notes that following a bridge collapse, many of the frequent causes of highway project delay are absent.

NEPA and foregone benefits

Besides the direct costs of compliance, another potential cost of the NEPA process is the projects that don’t occur at all. The NEPA process is effectively a tax on any major government action, and like any tax, we’d expect it to result in less of what it taxes. Is there any evidence this occurs?

Several industry executives have stated that in some cases they choose not to do projects rather than navigate the NEPA process. The previously quoted mining executive, for instance, testified that “for most projects, the time, cost, and uncertainty of obtaining approvals is simply too great in the United States, and mining investment looks elsewhere. The cumbersome NEPA process is key to this consideration.” And an executive from Vulcan Materials noted that “in some cases we conclude that the process is so burdensome we choose to not pursue aggregate resources rather than work through the drawn out and costly NEPA process.” 

This also seems to occur within government agencies. Stern 2014 found that Forest Service planners alter their plans to make them less ambitious as a way to avoid NEPA lawsuits.

Some evidence is indirect. For instance, a GAO study found that 33 states had avoided taking federal money for highway projects specifically so they could avoid the NEPA process. And Culhane 1990 suggests that NEPA compliance was partly to blame for the decrease in new Corps of Engineers projects:

Seven short years of EIS commenting and public participation reduced the Corps from its position as the premier powerful agency in the federal bureaucracy to the debacle of President Carter's "hit list." After the "hit list" affair, the Corps endured a severe drought, and authorized no new water projects until 1984.

Because the NEPA process adds a large review time to the beginning of a project, this sometimes screens off projects that by their nature need to be completed quickly. A Forest Service report noted that a drawn out environmental analysis and litigation process slowed a prescribed burn project meant to reduce the risk of wildfire, and the wildfire eventually occurred:

In December 1995, a severe winter storm left nearly 35,000 acres of windthrown trees on the Six Rivers National Forest in California. The storm’s effects created catastrophic wildland fire conditions, with the fuel loading reaching an estimated 300 to 400 tons per acre—ten times the manageable level of 30 to 40 tons per acre. 

The forest’s management team proposed a salvage and restoration project to remove excessive fuels and conduct a series of prescribed burns to mitigate the threat to the watershed. From 1996 through the summer of 1999, the forest wrestled its way through analytical and procedural requirements, managing to treat only 1,600 acres. 

By September 1999, nature would no longer wait. The Megram and Fawn Fires consumed the untreated area, plus another 90,000 acres. Afterward, the forest was required to perform a new analysis of the watershed, because postfire conditions were now very different. A new round of processes began, repeating the steps taken from 1996 to 1999. 

Seven years after the original blowdown, the Megram project was appealed, litigated, and ultimately enjoined by a federal district court. The plan to address the effects of the firestorm—a direct result of the windstorm—remains in limbo.

Because it adds cost and uncertainty to any new major project, NEPA is effectively a bias towards the status quo. As one environmental lawyer noted, “NEPA, being procedural and not substantive, is a hefty sword. It stops the projects many groups do like, along with the ones they don’t like.”

Like with any process that solicits public input, NEPA also seems easily captured by small groups with strongly held opinions, and thus subject to standard NIMBY effects. In some cases, it seems like NEPA makes projects unusually susceptible to a minority of strong opinions - for instance, agencies have noted that NEPA project challenges often originate from opponents “who are based out of state and not part of the communities they purport to represent.”

NEPA benefits and project management

It’s easy to find criticism of the NEPA process, but what are the benefits of it?

A report from the USDA notes that “the literature points to only a few effects of NEPA upon agencies’ planning processes that are not widely debated”, which are:

  • Agency staff becoming less homogenous (staff have a wider range of training and backgrounds)

  • Increased transparency of agency analysis and decision making

  • A wider range of alternatives for projects is typically considered.

But the report notes that “these shifts have come with associated costs of long delays in decision making as analyses are performed and reports are produced, and of high-priced responses to litigation of the agency processes.”

Some agency officials have also admitted that following the NEPA process resulted in considering alternatives that they wouldn’t have otherwise looked at:

One of our alternatives… I really don’t believe it would have been there but for the public involvement. We had a lot of people say…we just don’t want you to do any timber harvests. We just want you to thin the stuff and leave it. Nothing commercial. And you know, most of us just rolled our eyes and said, “Oh, that’s ridiculous, we can’t possibly do that.” And so, we said “OK, we’ll go through the process honestly and put up with this dumb idea.” Turned out the alternative was very feasible and was really quite reasonable and quite reasonably effective. It wasn’t as effective as some of the other alternatives, but you know, at the start we would have just completely discarded it except that we had a lot of people clamoring for it.

The Natural Resources Defense Council (NRDC), an environmental advocacy group and major NEPA champion (they’re part of the ProtectNEPA.org coalition, for instance), describes the benefits of the NEPA process:

At the heart of this review process is the agencies' obligation to consider alternatives to their original project designs, which motivates them to think outside the box, resulting in better projects that save money and reduce negative impacts. It also gives members of the public a voice in project design by letting them suggest alternatives, which promotes collaboration in planning and buy-in on final decisions.

And they list a few examples of successful NEPA processes:

  • “In California, the NEPA review process exposed the devastating impacts of the Army Corps of Engineers' plan to dredge the Bolinas Lagoon, one of the most pristine tidal lagoons in the state. While the proposal aimed to prevent silting in the lagoon, environmental reviews actually found that it would increase siltation. As a result, this misguided plan was abandoned in 2001, saving taxpayers $133 million. “

  • “The Department of Housing and Urban Development (HUD) proposed to construct the Palestine Commons Senior Living Facility project -- 69-units of elderly housing in a three-story structure in Kansas City, Missouri. HUD planned to build the facility on an old petroleum-tank site to contribute to Kansas City's redevelopment plan and support community revitalization. However, the NEPA process revealed potential soil and groundwater contamination on the site. Thanks to this law, the project plan was modified to include site remediation and thereby protect the facility's future residents.”

  • “The Route 52 causeway between Ocean City and Somers Point, first built in the 1930's, faced restricted lane and speed usage as it fell into disrepair, and the lack of shoulders posed a safety hazard to motorists. New Jersey and the Federal Highway Administration sought to rebuild the route to better serve the area. Thanks to input from area residents and other federal agencies during the NEPA process, the final environmental impact statement identified an alternative that minimized the route's environmental and socioeconomic impacts. For example, the final project avoided potentially extensive dredging and damage to wetlands as well as extensive property takings and changes in land usage. New bike paths, walking trails, and boat ramps are part of the causeway and mitigation measures were taken to account for the limited dredging and wetlands loss. Construction was finished in 2012.”

(Many more available at the link)

NEPA is often described by proponents as “a tool to make decisions” - the CEQ regulations, for instance, state that “The NEPA process is intended to help public officials make decisions that are based on understanding of environmental consequences” and that “NEPA’s purpose…is to foster excellent action.” But it’s unclear how successful it is at this. In a survey of 25 NEPA officials, only 4 described the quality of the decision as mattering for whether the NEPA process was successful or not, and the study noted that “staff interviewed in this study tended to focus more upon the processes through which they could complete the procedural requirements of the act with least resistance.” A 1986 study of environmental impact statements found that only 30% accurately predicted a project’s impacts, with most too vague or abstract to evaluate. It also seems telling that no organization I’m aware of willingly adopts the NEPA process.

Reading through examples of successful NEPA processes, it seems as if the purpose of the NEPA process as it exists is to try to legislate good project management - ensure that requirements are gathered upfront, that the relevant laws are considered, that many possible solutions are proposed, that relevant stakeholders are brought on board and have their views considered, and so on - and that many examples of NEPA “successes” (such as “ensuring compliance with all relevant laws”) are things that a competent project manager would have done anyway.

Overall, it seems like NEPA has some level of success at improving average project outcomes (probably by screening off certain types of extremely poor planning), but if you think the government needs to be better at project management there are probably better ways of doing that. And of course, any success would have to be balanced against the large costs that the NEPA process incurs.

NEPA as an anti-law

The more you look at NEPA, the more it seems like a very strange law.

Arguably, the purpose of a law is two-fold:

  1. To prevent or encourage some particular thing that society thinks is good/bad. Laws against drunk driving exist, for instance, because drunk driving is harmful and as a society we want less of it.

  2. To solve coordination problems and create predictability. Laws that enforce driving on one side of the road and not the other, for instance, exist not because one side is better than the other but because it’s useful if everyone agrees, and everyone knows what to expect from other drivers.

NEPA arguably does neither of these things.

For the first, as we’ve seen, NEPA does not require that environmental impacts be limited, only that they be thoroughly documented through a specific process. (To the extent that NEPA does result in reduced environmental impacts, that seems to be an indirect effect of making doing anything procedurally costly.) And the thing that NEPA does create directly - additional government process - is something we want less of.

For the second, NEPA doesn’t create predictability. In fact, it greatly reduces predictability and increases coordination cost and risk, because it’s so unclear what’s needed to meet NEPA requirements. Agencies are forced into risk-reduction strategies that at best require spending significant time and resources to defend against possible litigation, and at worst mean projects don’t happen at all.

[0] - This is in contrast to CEQA, which does require that environmental impacts be mitigated.

[1] - The Department of Energy isn’t necessarily the most representative agency with respect to NEPA implementation, but much more data is available for it - between 1994 and 2017 the DoE issued a quarterly “NEPA best practices” newsletter that included, among other things, costs and completion times for EISs and EAs.



ManBehindThePlan, 404 days ago:
The law of unintended consequence is strong in this one

Unreliable Connection

1 Comment and 7 Shares
NEGATIVE REVIEWS MENTION: Unreliable internet. POSITIVE REVIEWS MENTION: Unreliable internet.
ManBehindThePlan, 407 days ago:
The last mile always ends up in feet

Alex Jones At The Tower of Babel

1 Comment

Pilate therefore said unto him, Art thou a king then? Jesus answered, Thou sayest that I am a king. To this end was I born, and for this cause came I into the world, that I should bear witness unto the truth. Every one that is of the truth heareth my voice.

Pilate saith unto him, What is truth? And when he had said this, he went out again unto the Jews, and saith unto them, I find in him no fault at all.

When Jesus appeared before Pilate, they spoke different languages. I don’t mean that literally — although maybe they did speak different languages and used a translator, or maybe spoke Aramaic, or Latin. I mean that they used language in completely different ways. Jesus was preaching. Pilate was judging. Jesus was talking about truth with a capital T. Pilate was trying to focus Jesus on the practicalities of the case, and perhaps making a mordant quip about the futility of the process when he said “what is truth.” There was no meeting of the minds.

When modern American political culture winds up in court, the effects are similar. The participants are speaking different languages, and using language in different ways. Courts are focused on a taxonomy of words. Are they factual? Are they opinion? Are they literal or figurative? Courts also care about the literal truth of words. That’s central to defamation law — it’s not defamatory unless it was false. Courts are about analysis, and the entire project of the law is about words meaning specific things.

But modern American political culture is emotive and even artistic. It uses language like a musician uses notes or an impressionist uses brush strokes. Whether it’s Marjorie Taylor Greene talking about Bill Gates' efforts to colonize our bowels through "peach tree dishes" or Alex Jones ranting about gay frogs, modern politicians and pundits use language to convey feelings and attitudes and values, not specific meanings. If you demand Alex Jones defend the specific meaning of his words, it’s like demanding your eight-year-old defend his statement that his birthday party was the best day ever when previously that’s what he said about Disneyland. Trump was the Salvador Dali of this movement, his speeches full of melting clocks of ire and resentment. As an artist of lies he was prolific.

I’m offering a descriptive observation, not a positive normative judgment. Truth exists. Truth matters. Even if Alex Jones’ broadcasts are dreamscapes of spleen, they have real-world effects. Some people take them literally and act accordingly, as we’ve seen as the parents of murdered children tell their harrowing stories of the harassment Jones encourages. And a society where words are unaccountable, where language is just us finger-painting with our own shit, is ungovernable and unlivable.

The point is that courts are ill-equipped to deal with people like Alex Jones, and people like Alex Jones are ill-equipped to deal with courts. Jones’ catastrophic testimony in his own defense illustrates this. Jones struggled to fit his bombast within the framework of the law, within the distinction between fact and opinion. It’s a bad fit because that’s not how he uses words. If Jones had been honest — an utterly foreign concept to him — he might have said “I just go out there and say what I feel.” The notion that Sandy Hook was a hoax is a word-painting, a way of conveying Jones’ bottomless rage at politics and media and modernity, and he can no more defend it factually than Magritte could defend the logical necessity of a particular brushstroke.

It’s fitting that Alex Jones is held accountable for the impact of his words. He used false statements of fact to paint his picture, and those false statements of fact caused harm. But I suspect that a vast judgment against Jones won’t have much value as a deterrent or as a proclamation of truth. Jones is loathsomely rich because people want to consume his art. His landscapes of hate and fear and mistrust resonate with a frightening number of Americans. The people who enjoyed his Sandy Hook trutherism didn’t enjoy it because it was factually convincing or coherent; they enjoyed the emotional state it conveyed because it matched theirs. The plodding technicalities of law are probably inadequate to change their minds.

Defamation cases like this one — or Dominion’s case against Sidney Powell, or the parade of defamation claims against Trump — are just, and it’s just that the victims receive compensation. But they don’t solve the problem. America can survive the demagogues themselves, it’s their audience that will kill us.



ManBehindThePlan, 413 days ago:
Mordant commentary on the proletariat

Hydroponics: Growing an Appreciation for Plants

1 Comment and 3 Shares

I once heard a saying – “Don’t feel pity on plants because they can’t move. Feel pity on us, because we have to”. I really didn’t have an appreciation for what this meant until the COVID pandemic hit, which restricted my movement for a couple of years, and I decided to spend some of my new-found time at home learning how to raise plants in my little flat in central Singapore. The result is a small hydroponics system that now lines the sunny windows of my place, yielding fresh herbs weekly that I incorporate into my dishes.

For me, hydroponics really drove home how remarkable plants are: from a bin containing nothing but water and salts, a fully-formed plant emerges. No vitamins, amino acids, or other nutrients – just add sunlight, and the plant produces everything it needs starting from a single, tiny seed. The seed encodes every gene it needs to survive and reproduce – our basil plant, for example, is tetraploid, which means it has four copies of every gene. Perhaps this somewhat explains the adaptability of plant clones – it is almost as if every branch on our basil bush has a separate character, each one trying a different angle at survival. Some branches would grow large and leafy, others small and dense, and if you propagate by a cutting, the resulting plant would inherit the character of the cutting. Thus, a lone plant should not be mistaken as lonely: it needs not a mate to create diverse offspring. Every tetraploid cell contains the genetic diversity of two diploids (whereas a human is one diploid), allowing it to adapt without need of sex or seedlings.

I also did an experiment and grew some sage from seed, and planted one set in dirt and another in hydroponics. Even though from the same seed stock, the resulting individuals bore little resemblance to each other. The dirt-grown sage looked much like the herb you’re familiar with in the grocery store – dark green, covered with fine hair, and densely arranged on a stem. The hydroponically grown sage instead grows like a vine, with long thin green stems between each leaf, the leaves themselves having a lighter color and less hair. The flavor is even a bit different; the hydroponic sage emits a slightly sulfurous odor when disturbed, and exhibits a bit more mint on the palate when eaten.

Even more fascinating is how the plants seem to “groom the water”. I’ve noticed that the most successful plants we’ve tried to grow can lower the pH of the water on their own, and regulate it within a fairly consistent band (more on this later!). Furthermore, they seem to have recruited commensal organisms to live among their roots. The basil grows long white or translucent roots with a pale white mycorrhiza, while the sage has a brownish symbiont and a short, bushy root ball. Thus I only fully replace the water of the hydroponic system as a last resort, if a plant seems diseased; normally I cycle the water by removing about half of the reservoir and topping it up, so as not to displace the favored microbes from the ecosystem.

The Setup

The initial inspiration to try hydroponics actually came a bit by chance. We bought some locally grown hydroponic lettuce, and noticed that they were packaged as whole plants, complete with roots. We were curious – could we pluck most of the leaves of this lettuce, and then stick the plants in water, and grow another serving of hydroponic lettuce?

Surprisingly enough, it worked! Even with a crude setup consisting of a handful of generic plant fertilizer and a small aquarium bubbler, we were able to take a single plant and grow a couple more servings of lettuce from it. Unfortunately, with time, the plants started to grow very “stemmy” and pale, and eventually they succumbed to tiny mites that infested their leaves.

Inspired by this initial success, I started to read up a bit on how others did hydroponics. One of the top hits is a blog by Kyle Gabriel, detailing how he built an extremely sophisticated system based around a Raspberry Pi and a multitude of sensors, valves and pumps. It was sort of a nerd’s dream of how farming could be fully automated. I figured I’m pretty handy with a soldering iron, so maybe I could have a go at building a system like his. So, I dug up a spare Raspberry Pi, some solid-state relays, and white LEDs left over from when I did the house lighting, and put together a simple system that just automated the lighting and took hourly photos of the plants as they grew. The time-lapses were fascinating!

You can’t really watch a plant grow in real time, but, over a period of days one can easily see patterns in how plants grow and adapt.
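For a sense of scale, that first pass at automation amounted to little more than a light timer plus an hourly photo. A minimal sketch of such a script is below – the pin number, the schedule, and the use of raspistill here are just illustrative, a sketch of the idea rather than the code that actually ran:

# Minimal sketch of a grow-light timer plus hourly time-lapse capture on a
# Raspberry Pi. Pin number, schedule, and the use of raspistill are assumptions.
import subprocess
import time
from datetime import datetime

import RPi.GPIO as GPIO

LIGHT_PIN = 17        # relay driving the LED grow lights (assumed BCM pin)
LIGHTS_ON_HOUR = 7    # 7AM
LIGHTS_OFF_HOUR = 19  # 7PM

GPIO.setmode(GPIO.BCM)
GPIO.setup(LIGHT_PIN, GPIO.OUT)

try:
    while True:
        now = datetime.now()
        # Simple 12-hours-on / 12-hours-off lighting schedule.
        GPIO.output(LIGHT_PIN, LIGHTS_ON_HOUR <= now.hour < LIGHTS_OFF_HOUR)
        # Grab one frame per hour for the time-lapse (directory assumed to exist).
        if now.minute == 0:
            subprocess.run(["raspistill", "-o",
                            now.strftime("timelapse/%Y%m%d_%H%M.jpg")])
        time.sleep(60)
finally:
    GPIO.cleanup()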

With this small success, I put my mind toward further automation – adding various pumps and regulators for the system. However, as I started to put together the BOM (bill of materials) for this, I realized very quickly that there was going to be no return on investment for building out a system this complicated. Plus, I really didn’t like that the whole system ran on code – I did not relish the idea of coming home to a room flooded with water, or a set of rotten plants, because my control program hit a segfault.

So, I sat back and thought about things a bit. First, one observation I had was that despite providing the plants with a 10,000 lux light source 12 hours a day, they still had a tendency to grow toward the nearby window. As an experiment, I took one bin off the regulated light source and just stuck it up against the window. The plant grew much better with natural sunlight, so I removed all the artificial lighting, unplugged the Raspberry Pi, and just stuck all the plants against the windowsill (it definitely helps that I live one degree off the equator – it’s eternally summer here, with sunrise at 7AM and sunset at 7PM, 365 days a year). I was happy to save the electricity while getting bigger plants in the process.

For water level automation, I replaced the computer with two float switches wired in series. One switch cuts off the pump if the water level gets too low in the feed reservoir; the other cuts off the pump if the water level gets too high in the plant’s growth bin. The same type of switch works for both purposes; mounting the switch upside down inverts its function.
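Logically, the series wiring is just an AND condition: the pump runs only if the reservoir still has water and the growth bin is not yet full. Expressed as code purely for illustration (the real system is two switches and a pump – no software involved):

# Illustrative model of the two-float-switch interlock. In hardware this is
# simply two switches wired in series with the pump; no microcontroller needed.

def pump_should_run(reservoir_has_water: bool, grow_bin_full: bool) -> bool:
    # The reservoir switch opens when the feed reservoir runs dry.
    # The growth-bin switch is mounted upside down, so it opens when the bin is full.
    return reservoir_has_water and not grow_bin_full

# Truth table: the pump only runs when there is water to move and room to put it.
for reservoir_has_water in (True, False):
    for grow_bin_full in (True, False):
        state = "ON" if pump_should_run(reservoir_has_water, grow_bin_full) else "OFF"
        print(f"reservoir water: {reservoir_has_water!s:5}  bin full: {grow_bin_full!s:5}  pump {state}")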

The current “automated” system, consisting of a reservoir on the left, with a peristaltic pump on top of the reservoir bin, and two float switches. The silicone tube that takes the solution from the reservoir to the plant bin is covered by an old sock to prevent algae from growing in the solution when it’s not moving. There is also an aeration pump, not visible in this photo.

The float switch mounted on the top, functioning as a “break-when-full” switch. You can see the plant’s roots have taken over the entire bin! A couple of spacers were also added to adjust the height of the water.

The float switch mounted on the bottom of a tank, functioning as a “break when empty” switch. In order to provide clearance for the switch on the bottom, a couple of wine corks were hot-glued to the bottom of the bin. The switch comes with a rubber o-ring, creating an effective seal and no leakage.

So, with a couple of storage bins from Daiso, two float switches and a peristaltic pump, I’ve constructed a system that automates the care of our plants for up to two weeks at a time for under $40. No transistors required – just old-school technology dating from the 1800’s!

There is one other small detail necessary for hydroponics – an aeration pump. Any aquarium pump will do, although we eventually upgraded to some fancy silent pumps instead of the cheaper but noisier diaphragm-based ones. Some blogs say that the “roots need oxygen” to survive, but my suspicion is that the pumps mostly serve to circulate the nutrient solution. If you leave the pump off, the roots will rapidly deplete the water around them of nutrients, and without any circulation you’re relying purely on a slow process of diffusion for nutrients to reach the roots. I’ve noticed that in bins with a low air flow the roots grow thick and matted, while in bins with a faster air flow the roots barely need to grow at all – my hypothesis is that this reflects the plant allocating fewer resources toward root growth when circulation is faster, because fresh solution is then always available at the roots.

The Tricky Bit

The electronics were actually the easiest part of the whole enterprise; the hardest part was figuring out what, exactly, I had to add to the water to get the plants to flourish. Once I got this right, the plants basically take care of themselves; of course it helps to pick plant varietals that are pest-resistant, and have the innate ability to regulate the pH level around their roots.

When I started, I was only naively aware that plants needed nitrogen-bearing fertilizers. The labels on packaged fertilizer solutions use an “NPK” system, which stands for nitrogen-phosphorus-potassium. OK, sure, so plants need a bit more than just nitrogen. Surely I could just pick up some of this NPK stuff, dissolve it in some water, and we’re good to go…

…but how much of this should I add? This deceptively simple question led me down a several-month rat-hole of failed experiments and daily journals of observations before I found an answer. The core problem is that most plant bloggers like to use “one handful” as a unit of measure; the more precise ones would write something to the effect of “one capful per gallon”. To an engineer, units of handfuls and capfuls are extremely dissatisfying: how many grams per liter, dammit!
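As an aside on why the units matter so much: fertilizer labels conventionally report N as elemental nitrogen but P and K as the oxides P2O5 and K2O, so even a precise grams-per-liter dose needs converting before you know what the plant actually receives. A rough sketch of that conversion, with a made-up label and dose rather than any recommendation:

# Convert a fertilizer label ("NPK 20-20-20", conventionally % N, % P2O5, % K2O
# by weight) plus a dose in g/L into elemental mg/L. Label and dose are made up.
P_IN_P2O5 = 2 * 30.974 / (2 * 30.974 + 5 * 15.999)  # ~0.436
K_IN_K2O = 2 * 39.098 / (2 * 39.098 + 15.999)       # ~0.830

def elemental_mg_per_liter(label=(20, 20, 20), dose_g_per_liter=1.0):
    n_pct, p2o5_pct, k2o_pct = label
    dose_mg = dose_g_per_liter * 1000.0
    return {
        "N": dose_mg * n_pct / 100.0,
        "P": dose_mg * p2o5_pct / 100.0 * P_IN_P2O5,
        "K": dose_mg * k2o_pct / 100.0 * K_IN_K2O,
    }

print(elemental_mg_per_liter())  # roughly {'N': 200, 'P': 87, 'K': 166} mg/L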

That question led me to several academic papers about plant nutrition, and to reading graphs of plant growth under “controlled” conditions that gave astonishingly contradictory results: the NPK ratios implied by some of the academic work were wildly different from what the plant bloggers relayed from their actual experience.

It turns out the truth is somewhere in between. A big confounding factor is probably the nature of the soil used in the research, versus the base quality of the water used in your hydroponic system. Most of the research I uncovered was written about fertilizing plants grown in soil, and for example “loamy diatomaceous earth” turns out to be quite a complicated mix of nutrients in and of itself.

The most informative bit of research I uncovered came from experiments in which a plant was grown, then ashed, and all the base elements were measured out from the resulting dry weight. It was here that I learned that, for example, molybdenum is absolutely essential to the growth of plants. It’s almost never mentioned in the context of soil culture, because dirt almost always has sufficient trace quantities of molybdenum to sustain plants, but water cultures quickly become molybdenum-deficient, and the plants will become pale and sickly without a supplement.

I also learned that plants need calcium and magnesium in astonishingly large quantities – as much as they need phosphorus and potassium. Again, these two nutrients are less discussed in the soil-based literature because many rocks are basically made of calcium and magnesium, and as such plants have no trouble extracting what they need from the soil.

Finally, there is the issue of iron. Iron turns out to be the hardest nutrient to balance in a hydroponic system. Despite being extremely plentiful on Earth – and, as the endpoint of stellar fusion, arguably what the composition of the universe is trending toward – it is extremely scarce as a free atom in the biosphere. This is in part because it gets strongly bound to other molecules. For comparison, oxygen binds to myoglobin with a log K1 of 6.18, which means it is about a million times more likely to find oxygen bound to myoglobin than unbound in solution. That may sound strong, but EDTA, a chelating agent, binds iron with a log K1 of something like 27.7, so at equilibrium it is on the order of an octillion (1,000,000,000,000,000,000,000,000,000) times more likely to exist bound to iron than unbound. In a way, iron is so biologically important that organic life has had an arms race to bind free iron, and some ridiculously potent molecules exist to rapidly sweep the tiniest amount of iron out of solution. Fortunately, as long as I (or, more conveniently, the plant itself) can keep the pH of the water below 5.5, I can take advantage of the extremely strong binding of EDTA to iron to keep it dissolved in solution and out of reach of other organisms trying to scavenge it out of the water. The plants can somehow take in the bound iron-EDTA complex, degrade the EDTA, and extract the iron for their own use (it took a long time and many trials with various iron-binding agents to figure out how to remedy the chlorosis that would eventually take over every plant I grew).
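(For the curious, those ratios are just the equilibrium constants written out as powers of ten:)

# A log K of x corresponds to a ratio of 10**x.
print(f"{10**6.18:.2e}")   # ~1.5e6  - "about a million"
print(f"{10**27.7:.2e}")   # ~5.0e27 - on the order of an octillion (1e27)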

Alright, now that I have a vague understanding of the atoms that a plant needs to survive, the question is how do I get them to the plant – and in what ratios? The answer is equally vague and frustrating. You can’t simply throw a chunk of magnesium metal into a bin of water and expect a plant to access it. The magnesium needs to be in the form of a salt so that it readily dissolves into the water. One of the easiest versions to buy is magnesium sulfate, MgSO4, also known as Epsom salt. So, I can just read the blogs, find the ones that tell you how many grams of magnesium sulfate to add per liter of water, and be done with it, right?

Wrong again! It turns out that MgSO4 comes in several “hydration states” (11 in total). Even though it looks like a hard, translucent crystal, Epsom salt is actually more water than magnesium sulfate by weight, as 7 molecules of water are bound to every molecule of magnesium sulfate in that preparation.

Of course, no plant blogger ever specifies the hydration states of the salts they use in their preparations, and many online listings for agricultural-grade salts also fail to list the exact hydration state. Unfortunately, this means there can be extremely large deviations in actual nutrient availability if you purchase a different hydration state from the one the blogger used.
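To put numbers on how much the hydration state matters, compare the magnesium content of anhydrous versus heptahydrate magnesium sulfate (standard atomic weights; the point is the roughly two-fold difference, not the decimals):

# Elemental magnesium per gram of "magnesium sulfate" depends heavily on
# which hydration state of the salt you actually bought.
MG, S, O, H = 24.305, 32.06, 15.999, 1.008

anhydrous = MG + S + 4 * O                   # MgSO4, ~120.4 g/mol
heptahydrate = anhydrous + 7 * (2 * H + O)   # MgSO4.7H2O (Epsom salt), ~246.5 g/mol

print(f"Mg fraction, anhydrous:    {MG / anhydrous:.1%}")     # ~20%
print(f"Mg fraction, heptahydrate: {MG / heptahydrate:.1%}")  # ~10%
# Dosing by weight with the wrong hydration state roughly doubles or halves
# the magnesium you intended to add.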

That left me with purchasing a set of salts and trying to calculate, from first principles, the ratios that I needed to add to my hydroponics bins. The salts I finally decided on purchasing are:

  • Monopotassium phosphate (anhydrous) KH2PO4
  • Potassium sulfate (anhydrous) K2SO4
  • Calcium nitrate Ca(NO3)2•4H2O – hygroscopic
  • Magnesium sulfate MgSO4•7H2O

Plus a pre-mixed micronutrient from a local hydroponics shop that contains the remaining essential elements in the following ratios:

  • Iron as EDTA chelate 21.25 mg/mL
  • Manganese 5.684 mg/mL
  • Boron 0.483 mg/mL
  • Zinc 0.617 mg/mL
  • Copper 0.267 mg/mL
  • Molybdenum 0.471 mg/mL

For the salts, I computed a matrix that allows me to solve for the amount of nutrient I want in solution, by taking the mass fraction of each nutrient available, writing it in matrix form, and then inverting it (had to crack open my linear algebra book from high school to remember what determinants were! Who knew that determinants could be useful for farming…).

You can make the matrix yourself by expressing, for each salt, the milligrams of nutrient (derived from the atomic weight of the nutrient) per milligram of compound (derived by summing the weights of all the atoms in the molecular formula, including the hydration water), and putting these ratios into a matrix form like this:

And then taking the coefficients into an inverse matrix calculator and deriving a final format that allows you to plug in your desired NPK ratio and compute the mass of the salts you need to dissolve in water to achieve that:

As a sanity check, I plug the calculated weights back into the forward matrix to make sure I didn’t mess up the math, and I also add up all the dissolved solids into a TDS (total dissolved solids) number, so I can cross-check the resulting solution easily using a cheap TDS meter (link without referral code). In case you want to start from a template, you can download the spreadsheet. The template contains the pre-computed ratio that I currently use for growing all my herbs, with compounds that I can source easily from the local market, and it seems to work fairly well for plants ranging from Brazilian spinach to basil to sage.
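The spreadsheet itself isn’t reproduced here, but the calculation it performs can be sketched in a short NumPy script: build the nutrient-per-salt mass-fraction matrix from the molecular formulas, then solve the linear system for the salt masses. The target concentrations below are placeholders for illustration, not the ratios from the template:

# Sketch of the mass-fraction matrix approach. Targets are placeholders only.
import numpy as np

# Atomic weights in g/mol.
AW = {"H": 1.008, "N": 14.007, "O": 15.999, "Mg": 24.305, "P": 30.974,
      "S": 32.06, "K": 39.098, "Ca": 40.078}

def molar_mass(formula):
    # formula is a dict of element -> atom count, hydration water included
    return sum(AW[el] * n for el, n in formula.items())

def mass_fraction(formula, element):
    # mg of element per mg of compound
    return AW[element] * formula.get(element, 0) / molar_mass(formula)

# The four salts listed above, with their hydration water written out.
salts = {
    "KH2PO4":        {"K": 1, "H": 2, "P": 1, "O": 4},
    "K2SO4":         {"K": 2, "S": 1, "O": 4},
    "Ca(NO3)2.4H2O": {"Ca": 1, "N": 2, "O": 10, "H": 8},
    "MgSO4.7H2O":    {"Mg": 1, "S": 1, "O": 11, "H": 14},
}

# Each salt is the sole source of one of N, P or Mg, and K2SO4 tops up the
# potassium, so solving for N, P, K, Mg gives a well-determined 4x4 system;
# calcium and sulfur then come along for the ride.
nutrients = ["N", "P", "K", "Mg"]
A = np.array([[mass_fraction(f, nutrient) for f in salts.values()]
              for nutrient in nutrients])

target_mg_per_L = np.array([150.0, 50.0, 200.0, 50.0])  # placeholder N, P, K, Mg

salt_mg_per_L = np.linalg.solve(A, target_mg_per_L)
for name, mg in zip(salts, salt_mg_per_L):
    print(f"{name:>14}: {mg:7.1f} mg per liter")
# A negative number here would mean the target ratio is not reachable
# with these salts alone.

# Forward check plus a rough TDS figure to compare against a TDS meter.
print("achieved mg/L (N, P, K, Mg):", A @ salt_mg_per_L)
print("approx. TDS:", round(float(salt_mg_per_L.sum())), "mg/L")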

As a side note, calcium nitrate is pretty tricky to handle. It’s very hygroscopic, so if left in ambient humidity, it will absorb water from the atmosphere and “melt away” into a concentrated, syrupy liquid. I usually add a few percent extra by weight over the formula to compensate for the excess water it accumulates over time. Also, I store the substance in an air-tight bag, and I always wear nitrile gloves while handling the compound to avoid damaging my hands.

For the micronutrients, it’s a bit trickier to dose correctly. Fortunately, I have a micropipette set that can measure out solutions in the range from 1uL to 200uL, from back when I did some genetic engineering in my kitchen (pipettes are also surprisingly cheap (without referral code) now). Again, the blogs are not terribly helpful about dosing – you get advice along the lines of “one drop per bucket” or something like that. What’s a drop? What’s a bucket? The exact volume of a drop depends on the surface tension and viscosity of the liquid, but I went with the rule of thumb that one drop is 50uL (20 drops per mL) as a starting point.

Initially, I tried 60uL of micronutrients per 1.5L of solution, but the plants started to show evidence of boron poisoning (this is a great guide for diagnosing plant nutritional problems based on the appearance of the leaves), so after a few iterations and replacements of the water to flush out the excess accumulated micronutrients, I settled on 30uL of micronutrients per 1.5L of solution, with a 15uL per week bump for iron-hungry species like spinach.
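For reference, the arithmetic behind those doses is straightforward. Using the iron concentration from the bottle listed above, the iron added works out to roughly:

# What the microliter doses work out to in mg/L of iron, using the
# 21.25 mg/mL figure from the micronutrient bottle.
FE_MG_PER_ML = 21.25
RESERVOIR_L = 1.5

for dose_ul in (60, 30, 45):  # initial dose, settled dose, iron-hungry bump
    fe_mg = FE_MG_PER_ML * dose_ul / 1000.0  # uL -> mL
    print(f"{dose_ul:>3} uL -> {fe_mg / RESERVOIR_L:.2f} mg/L Fe")
# i.e. from a few tenths of a mg/L up to just under 1 mg/L of iron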

At a microliters-per-week consumption rate, even the smallest 150mL bottle of micronutrient solution will last years; the tricky part is storing it. In order to avoid contaminating the bottle, I aliquot the solution every couple of months into a set of 1.5mL Eppendorf tubes, which I keep in my wine fridge alongside the original bottle. Even though I try my best to avoid contaminating the aliquots, after a couple of weeks a pellet forms at the bottom as something causes the micronutrients to come out of solution, so I typically end up discarding each aliquot before it is entirely used up.

The Final Result

It’s pretty neat to go from a pile of salts to delicious herbs. About a gram of salts goes in, and a week later a couple dozen grams of leaves come out!


In go salts…


Out comes basil!

Basil in particular has been a real champ at growing in our hydroponics bins – we are at the point now where, between two plants, we’re regularly giving basil away to friends because it yields more than we can eat, even though I cook Italian food almost every other night. A handful of basil, a bit of salt, olive oil, tomatoes and garlic, and we have a flavorful bruschetta to kick off a meal! Our other favorite is sage: it’s great for flavoring pork and poultry, but for some reason it’s very hard to find in Singapore. So, having a bit of fresh sage around is convenient and saves us a bit of money, since it can be quite expensive to buy in specialty stores.

It’s been less practical to grow bulk vegetables, such as spinach. Brazilian spinach has been fairly successful in terms of growth, but it takes about a month for a cutting to grow to maturity, and we need about four plants to make a salad, so we’d need several racks of bins to make a dent in our vegetable consumption. Also, in general our herbs have had fewer pests than leafy green vegetables; maybe their strong flavor comes from compounds that also serve to repel bugs? So, in addition to flavoring our sauces, the herbs have required no pesticides.

Overall, it was satisfying to learn about plant biology while developing a better connection to my food through technology. It was also a calming way to pass time during the pandemic; agriculture requires patience and time, but the reward is visceral. Having kept a miniature farmer’s almanac to decode missing pieces of information from the blogosphere, I have a new appreciation for how such personal journals could lead to scientific discoveries. And I’m a much better chef than I was a couple of years ago. Somehow, just having the fresh herbs around inspired explorations into new and exciting pairings; it gave me a whole new way to think about food.

ManBehindThePlan, 413 days ago:
Don't let an engineer around farmers - they'll upend all the mystery