LOTOcracy


I’m thinking that our experience with President Trump has pointed out areas of our government that require some adjustments, modifications, replacements, fine tuning, …  If we don’t learn from our mistakes, then …

An extreme solution, presented below by ChatGPT following my prompting:

https://en.wikipedia.org/wiki/Sortition

“lotocracy” (rule by random selection)

That idea is very real today, but it’s usually used in limited or experimental ways rather than running entire governments.

Here are clear, real-world examples where *lotocracy-style selection* is used today:

Citizens’ Assemblies (most prominent modern use)

These are groups of everyday people **randomly selected** to study and advise on major issues.

– **Ireland Citizens’ Assembly**

– Random citizens helped shape decisions on abortion laws and climate policy

– Their recommendations directly influenced national referendums

– **France Citizens’ Convention on Climate (2019–2020)**

– 150 randomly selected citizens proposed climate laws

– Many proposals were adopted or debated in parliament

– **UK Climate Assembly**

– Randomly chosen citizens reviewed how the UK should reach net-zero emissions

Citizens’ Juries and Panels

Smaller-scale versions used in many countries.

– Used in **Canada, Australia, UK, and the US**

– Citizens are randomly selected to:

– Review policies

– Hear expert testimony

– Make recommendations to governments

Jury Duty (classic lotocracy element)

This is one of the oldest and most widespread examples.

– **Legal juries** in countries like the US, UK, and Canada

– Citizens are randomly selected to decide legal cases

– This is a direct, functioning form of lotocracy in everyday governance

Sortition in Local Government Experiments

Some cities experiment with random selection for advisory roles.

– **Paris**: uses randomly selected citizen panels for urban planning input

– **Madrid**: has experimented with citizen assemblies tied to its “Decide Madrid” platform

– **Belgium (Ostbelgien region)**:

– Permanent citizens’ council partly selected by lot

– One of the most advanced modern uses

Political Theory and Proposals (not fully implemented)

– Some academics and reformers advocate replacing parts of legislatures with randomly selected citizens

– No country currently runs entirely on lotocracy, but hybrid systems are growing

Bottom line

– “Lotocracy” isn’t widely used as a term in everyday language, but the concept is very active

– It shows up most clearly in:

– Citizens’ assemblies

– Juries

– Advisory panels

– These systems are expanding, especially for complex policy issues where public trust matters

A fully lotocratic government would look like this in practice—it’s a pretty interesting model…

A fully lotocratic system replaces elections with **random selection (sortition)** for most governing roles. No country runs entirely this way today, but there are well-developed models showing how it could work in practice.

Core Idea

Instead of voting for politicians, **citizens are randomly selected** (like jury duty) to serve in government for limited terms. The goal is to reduce corruption, career politics, biased influence and polarization.

Structure of a Full Lotocratic Government

Legislative Branch (Law-Making)

**Citizens’ Assembly (main legislature)**

– 300–1,000 citizens randomly selected from the population

– Stratified sampling ensures representation (age, gender, region, etc.)

– Terms typically last 6–18 months

– Members are paid and supported like full-time officials

**How it works:**

– Members hear from experts, stakeholders, and advocates

– Deliberate in small groups and plenary sessions

– Vote on laws after structured discussion

Executive Branch (Administration)

**Selected administrators + expert civil service**

– Key roles (ministers, department heads) can be:

– Randomly selected from qualified pools, or

– Appointed by the assembly and subject to oversight

– Day-to-day governance handled by a professional civil service

**Checks:**

– Assembly can remove or replace officials

– Short terms limit power concentration

Judicial Branch (Justice System)

**Combination of professional judges + citizen panels**

– Judges remain trained legal experts

– Citizens continue to serve on juries (expanded role possible)

– Some oversight boards may also be randomly selected

Selection System (How People Are Chosen)

– Random draw from national registry (like tax or census records)

– Stratified to reflect demographics

– Opt-out allowed but limited to prevent bias

– Participants compensated and legally protected
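The registry draw described above can be sketched in a few lines of Python. This is a minimal illustration of stratified selection; the strata, registry, and seat count are all made up for the example and don't reflect any real system:

```python
import random

# Hypothetical registry: each entry is (person_id, region, age_band).
# The strata and seat count are illustrative, not from any real system.
registry = [(i, random.choice(["north", "south"]),
                random.choice(["18-34", "35-64", "65+"]))
            for i in range(10_000)]

def stratified_draw(pool, seats, rng):
    """Allocate seats across (region, age_band) strata in proportion to
    their share of the pool, then draw at random within each stratum."""
    strata = {}
    for person in pool:
        strata.setdefault(person[1:], []).append(person)
    selected = []
    for _, members in sorted(strata.items()):
        quota = round(seats * len(members) / len(pool))
        selected.extend(rng.sample(members, min(quota, len(members))))
    return selected

rng = random.Random(2024)      # fixed seed keeps the draw reproducible
assembly = stratified_draw(registry, seats=500, rng=rng)
print(len(assembly))           # close to 500 (per-stratum rounding)
```

Because each stratum gets a quota proportional to its share of the pool, the assembly mirrors the registry's demographics by construction rather than by chance.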

Safeguards and Design Features

– **Rotation:** frequent turnover prevents entrenched elites

– **Transparency:** deliberations often public or recorded

– **Facilitation:** trained moderators keep discussions productive

– **Balanced information:** structured input from multiple viewpoints

– **Ethics rules:** strict limits on lobbying and outside influence

Example Flow of a Law

1. Issue identified (public petition, expert report, or government need)

2. Subgroup of assembly studies the issue

3. Experts and stakeholders present evidence

4. Deliberation in small groups

5. Draft proposal created

6. Full assembly debates and amends

7. Final vote

8. Implementation by executive branch

Strengths

– Reduces influence of money and campaigning

– More representative of the actual population

– Encourages thoughtful, less partisan decisions

– Limits career politicians and power accumulation

Weaknesses / Challenges

– Participants may lack experience initially

– Requires strong facilitation and education systems

– Public trust can be uncertain without elections

– Scaling to large countries is complex

– Risk of low participation if many opt out

Hybrid Models (More Realistic Today)

Most proposals combine lotocracy with elections:

– Elected parliament + randomly selected second chamber

– Citizens’ assemblies with binding or semi-binding power

– Random selection used for oversight, auditing, or constitutional review

Bottom line

A full lotocratic model treats governance like an expanded form of jury duty—**ordinary people temporarily governing with structured support**. While not yet used at national scale, many of its components are already active and expanding, especially in citizens’ assemblies.

Modern tech makes lotocracy more **feasible**—but not automatically *safe or trusted*. The hard part isn’t just secure random selection; it’s legitimacy, transparency, and resistance to manipulation across the whole process.

Where technology clearly helps

– **Secure random selection**

– Cryptographic randomness (e.g., verifiable random functions) can produce draws that anyone can audit.

– Public “randomness beacons” (like those used in some blockchains) allow third parties to verify the draw wasn’t rigged.

– **Identity and eligibility**

– Digital ID systems can ensure “one person, one entry,” reduce duplicates, and support stratified sampling (age, region, etc.).

– Cross-checking registries (tax, census) helps maintain an accurate pool.

– **Auditability and transparency**

– Publish the selection algorithm and seeds so independent groups can reproduce the draw.

– Tamper-evident logs (e.g., append-only ledgers) make post-hoc manipulation detectable.

– **Operations at scale**

– Automated notifications, scheduling, and compensation.

– Secure portals for participants, document access, and evidence review.

– **Deliberation support**

– Structured briefing systems, AI-assisted summaries, and translation tools.

– Moderation tools that track speaking time, surface minority views, and reduce domination by a few voices.
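To make the "anyone can audit the draw" idea concrete, here is a minimal sketch. The seed string and pool are hypothetical placeholders; a real system would publish a beacon value and the eligible pool in advance so independent parties could re-run the same selection:

```python
import hashlib

def verifiable_draw(pool_ids, seed, n_seats):
    """Rank every candidate by SHA-256(seed:id) and take the lowest
    n_seats digests. Given the same published seed and pool, anyone
    re-running this code gets the identical winner list."""
    def ticket(pid):
        return hashlib.sha256(f"{seed}:{pid}".encode()).hexdigest()
    return sorted(sorted(pool_ids, key=ticket)[:n_seats])

# "beacon-round-1234" stands in for a public randomness beacon value
# announced after the pool is frozen; it is not a real beacon output.
pool = [f"citizen-{i:05d}" for i in range(100_000)]
winners = verifiable_draw(pool, "beacon-round-1234", 500)
print(len(winners))   # 500, identical on every re-run
```

The key property is that the operator cannot quietly re-roll: the draw is a pure function of the published seed and pool, so any third party can reproduce and check it.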

Where the analogy to banking breaks down

– **Different threat model**

– Banks defend against theft; a lotocratic system must also resist **political capture**, **subtle biasing**, and **perception of unfairness**.

– Even a tiny suspicion of bias can delegitimize outcomes.

– **Input integrity**

– The biggest risk is not the RNG (random number generator)—it’s **who’s in the pool**. Incomplete or biased registries skew results.

– **Coercion and influence**

– Selected citizens can be pressured, lobbied, or targeted with disinformation. That’s a human-layer risk, not just a systems one.

– **Public trust**

– People must believe the process is fair. “Black-box” tech—even if secure—can undermine confidence.

Design choices that make it more robust

– **Open, verifiable selection**

– Publish code, use public randomness sources, allow anyone to re-run the draw.

– Run multiple independent draws (different organizations) and compare outputs.

– **Stratified sampling with clear quotas**

– Pre-commit to demographic targets and publish how they’re enforced.

– **Independent oversight**

– Multi-party governance (courts, electoral bodies, civil society, academia) supervising each step.

– **Participant protections**

– Legal shields, anonymity where appropriate, compensation, and anti-corruption rules.

– **Hybrid decision-making**

– Pair citizen assemblies with elected bodies or referenda to reinforce legitimacy during early adoption.

– **Deliberation quality controls**

– Balanced expert panels, transparent evidence packs, and trained facilitators.

– AI used as **assistive**, with auditable prompts and outputs, not as a hidden decision-maker.
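The tamper-evident, append-only logs mentioned above can be approximated with a simple hash chain, sketched here with hypothetical log entries:

```python
import hashlib
import json

def append_entry(log, record):
    """Append a record to a hash-chained log: each entry commits to the
    previous entry's hash, so later tampering breaks the chain."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "record": record, "hash": digest})
    return log

def verify(log):
    """Recompute every hash from the start; any edit is detected."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

# Hypothetical process steps; "beacon-round-1234" is a placeholder.
log = []
append_entry(log, {"step": "pool published", "size": 100000})
append_entry(log, {"step": "seed committed", "seed": "beacon-round-1234"})
print(verify(log))                  # True
log[0]["record"]["size"] = 99999    # tamper with an earlier entry
print(verify(log))                  # False
```

An auditor who holds only the final hash can detect any rewrite of earlier steps, which is the property "append-only ledger" refers to.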

Role of AI specifically

– **Helpful**

– Summarizing large evidence sets, generating balanced briefs, translation, accessibility.

– Detecting coordination or anomalous influence patterns.

– **Risky**

– Bias in summaries, prompt manipulation, deepfakes, and targeted persuasion campaigns.

– Needs strict governance: logging, model audits, and clear boundaries.

Bottom line

Technology can make a lotocratic system **technically secure, scalable, and auditable**—arguably more so than ever. But success hinges on **transparent design, trusted institutions, and human safeguards**. Think of tech as the backbone, not the guarantee; legitimacy comes from how openly and fairly the whole pipeline is run.

Can we sign you up, and count on your support?

University Slum Neighborhood


I discussed our neighborhood problem with my Bot. This is our exchange:

Assume you are a sociology professor at the University of California, Riverside, where currently the growth of the university has surpassed the number of living units for students, with plans for massive future increases. So investment realtors are actively buying single-family homes, converting them without compliance with building codes, placing students in unsafe living environments, creating traffic problems, and subjecting the neighborhood to unsafe conditions. Many small houses now have half a dozen cars associated with them, with insufficient parking, so some of the parking is being done on the sidewalks. Tenants are reluctant to park on the grass, because enforcement from the landlord is present; however, there’s no enforcement from the city, so parking on sidewalks is preferred. All-way stop signs are numerous, installed by the city to serve as speed control. They are constantly being ignored, primarily because there’s no enforcement and there’s no valid reason for them in the first place. So any intelligent, immature person just drives right on through.

So how would you modify things in this city? Let’s say that you have influence in the city’s traffic engineering department and that you have input into the university’s growth plans. What would you suggest as a solution to keep this neighborhood a nice livable area, and not a slum, as has happened around several universities on the East Coast, many of which are now surrounded by slums, partially because of inattention to this type of problem. So give us some help here. I know of a sociology student who might be interested in a PhD thesis, who’s in a location to monitor this situation. So what kind of help would you suggest?

The situation you describe is a classic “studentification” process—where rapid university expansion outpaces housing supply and reshapes nearby single‑family neighborhoods. Many university towns (Berkeley, Boulder, Madison, Oxford, and several East Coast campuses) faced the same pattern: investor conversion of houses, overcrowding, vehicle saturation, and declining neighborhood stability. When unmanaged, the result is a deteriorating rental enclave; when managed deliberately, it can remain a stable mixed community.

A workable approach requires **simultaneous intervention in housing supply, neighborhood regulation, and transportation behavior**. Treating only one dimension (for example parking enforcement) usually fails because the underlying structural pressures remain.

1. Address the Structural Cause: Student Housing Supply

The central driver is the **mismatch between enrollment and available beds**. If enrollment grows faster than housing, students spill into nearby neighborhoods.

University‑level policy tools:

Enrollment–Housing Ratio Requirement

Tie enrollment growth to a guaranteed housing supply. A common planning target is **0.65–0.75 beds per student** for campuses embedded in residential cities.
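As a quick back-of-the-envelope illustration of that target (the enrollment and bed figures below are hypothetical, not UCR's actual numbers; 0.70 is just the midpoint of the 0.65–0.75 range):

```python
def housing_gap(enrollment, beds, target_ratio=0.70):
    """Beds required under a beds-per-student planning target, and the
    shortfall at current supply. target_ratio=0.70 is an illustrative
    midpoint of the 0.65-0.75 range; the other figures are made up."""
    required = enrollment * target_ratio
    return required, max(0.0, required - beds)

required, shortfall = housing_gap(enrollment=30_000, beds=9_000)
print(required, shortfall)   # 21000.0 12000.0
```

In this made-up scenario, every unit of the 12,000-bed shortfall represents demand that spills into the surrounding single-family neighborhoods.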

On‑Campus or University‑Owned High‑Density Housing

Construct mid‑rise student housing (5–8 stories) near campus edges and transit corridors rather than allowing dispersal into single‑family zones.

Master Leasing Programs

The university leases entire apartment complexes and subleases them to students under campus conduct rules.

Public–Private Partnerships (P3 housing)

Developers build high‑density student housing under university oversight while keeping single‑family areas protected.

Freshman/Sophomore Residency Requirements

Many universities require first‑ and second‑year students to live in campus housing, which stabilizes nearby neighborhoods.

This approach attacks the root cause: **too many students chasing too few beds.**

2. Protect Single‑Family Neighborhoods Through Occupancy Regulation

Cities with strong university pressure often regulate **occupancy rather than ownership**.

Possible municipal tools:

Maximum Occupancy Based on Bedrooms

Example rule used in many college towns:

Occupancy = Bedrooms + 1

A three‑bedroom house would therefore allow **4 residents**, preventing the common practice of placing 6–10 students in a house.

Rental Licensing System

Require landlords to obtain a **rental permit** for each property.

Licensing can require:

– Annual safety inspections

– Verified bedroom counts

– Parking compliance plans

– Emergency contact information

Distance Buffer (“Anti‑Clustering”)

Prevent new student rentals within a certain radius (e.g., **400–500 feet**) of another licensed student rental to preserve mixed occupancy.

Strict Building Code Enforcement

Illegal bedroom conversions (garage, living room, attic) should trigger fines large enough to outweigh the profit.

Cities such as **Boulder and Fort Collins** have used these tools successfully.

3. Parking and Transportation Controls

Parking saturation is a visible symptom but also a powerful regulatory lever.

Neighborhood Parking Permit District

Create a **resident-only permit system**:

– Each property receives **1–2 permits maximum** regardless of number of tenants.

– Additional vehicles must park in paid district lots or campus garages.

This directly discourages overcrowding because tenants cannot store unlimited cars.

Sidewalk Parking Enforcement

Parking on sidewalks is typically prohibited under California vehicle law.

Relevant state law includes:

– **California Vehicle Code §22500(f)** – prohibits stopping or parking on a sidewalk.

– **California Vehicle Code §22500(e)** – prohibits parking in front of a driveway.

Consistent enforcement quickly changes behavior.

Traffic Calming That Cannot Be Ignored

Stop signs alone rarely work without enforcement.

More effective designs include:

– **Raised intersections**

– **Speed tables**

– **Chicanes**

– **Curb extensions**

– **Mini roundabouts**

These create **self‑enforcing speed control** rather than relying on compliance.

4. Align University Incentives with City Outcomes

Many conflicts arise because universities expand enrollment while cities bear the neighborhood impacts.

Several policy mechanisms exist:

University Housing Impact Fee

If enrollment grows faster than campus housing, the university pays into a **city housing mitigation fund** used for infrastructure and enforcement.

Joint Planning Agreements

Formal city–university compacts where enrollment projections trigger housing development obligations.

Campus Transportation Requirements

Require student transit passes and restrict on‑campus parking eligibility.

5. Encourage Purpose‑Built Student Housing Zones

Rather than dispersing students into single‑family neighborhoods, cities often designate **student housing corridors**.

Characteristics:

– Mid‑rise apartments

– Retail on ground floors

– Transit access

– Reduced parking requirements

This creates a **controlled student district** instead of uncontrolled neighborhood conversion.

6. Enforcement Capacity

Rules without enforcement fail quickly.

A typical model includes:

– **Dedicated university neighborhood enforcement team**

– Nighttime parking patrols

– Rental inspection staff

– Complaint hotlines

Many college towns fund this through **rental license fees**.

7. A Strong PhD Research Opportunity

Your student could design a dissertation around **urban studentification and neighborhood resilience**.

Possible research components:

**Research Questions**

– How does enrollment growth correlate with neighborhood housing conversion?

– What policy interventions prevent “slumification” around universities?

– How does vehicle ownership influence neighborhood degradation in student districts?

**Methods**

– GIS mapping of student rental conversions

– Parking density measurements

– Stop‑sign compliance observation

– Surveys of residents vs. student tenants

– Landlord ownership network analysis

**Natural Experiment**

If the city introduces:

– permit parking

– rental licensing

– increased campus housing

the student could track **before/after neighborhood outcomes**.

**Data Sources**

– parcel ownership records

– code enforcement records

– parking citation data

– enrollment growth statistics

– housing permits

8. A Balanced Strategy

The most successful university cities use **three simultaneous levers**:

– expanding student housing supply

– regulating occupancy and rentals in single‑family neighborhoods

– controlling parking and traffic behavior

Final Perspective

Neighborhood decline around universities rarely happens suddenly. It occurs when **three actors pursue independent incentives**:

– universities maximize enrollment

– landlords maximize rent per square foot

– cities delay enforcement

The remedy is coordinated governance where **housing supply, zoning, and transportation policy move together**. When done well, university districts become vibrant mixed communities rather than deteriorating rental zones.

If your student is interested in a dissertation topic, the Riverside situation is actually an excellent **live case study of unmanaged studentification**—a phenomenon urban sociologists increasingly study across North American university towns.

Have you seen situations where the city will stop serving a neighborhood in order to encourage old residents to sell their property to the real estate developers who will serve the interest of the university and the city?

Yes. Situations resembling what you describe have occurred in some cities, although they are rarely stated openly as a policy. In urban sociology and planning literature this pattern is usually discussed under concepts such as **urban growth coalitions**, **planned shrinkage**, **disinvestment**, or **indirect displacement**. The common dynamic is that municipal priorities shift toward redevelopment aligned with powerful institutions—often universities, hospitals, or large developers—while services in adjacent residential areas decline.

https://whorulesamerica.ucsc.edu/local/growth_coalition_theory.html

https://en.wikipedia.org/wiki/Municipal_disinvestment

https://pmc.ncbi.nlm.nih.gov/articles/PMC6927009

Growth Coalitions and Institutional Expansion

Urban sociologists such as **Harvey Molotch** and **John Logan** describe cities as being influenced by “growth coalitions,” alliances between:

– municipal officials

– real estate developers

– large institutions (universities, hospitals, stadium authorities)

– financial interests

https://en.wikipedia.org/wiki/Harvey_Molotch

These coalitions often prioritize **land value growth and redevelopment**. Neighborhoods near expanding universities sometimes become targets for redevelopment because the land is valuable for student housing, research parks, or commercial districts.

In such cases, cities may gradually favor redevelopment projects through:

– zoning changes

– infrastructure investment directed toward redevelopment zones

– tax incentives for developers

– relaxed enforcement of certain conditions that indirectly encourage turnover of property ownership

The effect can create pressure for long‑term residents to sell.

Historical Example: “Planned Shrinkage”

One widely discussed historical case is **New York City in the 1970s**, where reductions in fire protection and municipal services occurred in several neighborhoods experiencing fiscal crisis.

The term **“planned shrinkage”** was used by urban scholars to describe a policy approach where cities concentrated services in economically stronger areas while allowing weaker neighborhoods to decline, expecting population to move elsewhere.

The outcome included:

– property abandonment

– declining maintenance

– accelerated turnover of housing

While not always intentional displacement, the reduced service environment contributed to neighborhood change.

University Expansion Cases

Several university cities have faced controversy over perceived municipal alignment with institutional expansion.

Examples often discussed in planning literature include:

– **Columbia University expansion in West Harlem (New York)**

Redevelopment and rezoning enabled a large campus expansion, accompanied by community concerns about displacement.

– **University of Pennsylvania expansion in West Philadelphia**

Major redevelopment in the 1990s involved partnerships with the city to reshape surrounding neighborhoods.

– **Yale University expansion in New Haven**

Urban renewal policies in the mid‑20th century cleared residential areas for institutional development.

In these cases the mechanism was usually **rezoning, eminent domain, or redevelopment incentives**, rather than explicitly stopping services.

More Subtle Forms of Pressure

Rather than openly withdrawing services, modern cities more often create indirect pressures such as:

– slower infrastructure maintenance

– limited enforcement of housing violations

– rezoning that increases land value for redevelopment

– infrastructure investments that favor institutional districts rather than residential ones

These dynamics can shift a neighborhood from **stable owner occupancy toward investor ownership**.

Important Distinction

Documented cases rarely show explicit written policies stating that a city intends to stop serving a neighborhood to force sales. Instead, what researchers usually observe is:

– **institutional priorities influencing resource allocation**

– **development incentives favoring redevelopment**

– **policy neglect that accelerates neighborhood turnover**

From a sociological perspective, the outcome may resemble deliberate displacement even when the official policy language frames it as **economic development or urban revitalization**.

Relevance to University Towns

University-adjacent neighborhoods are especially vulnerable because:

– universities often expand steadily over decades

– students create strong rental demand

– developers can profit from high-density student housing

– municipalities benefit from increased property values and tax revenue

Without explicit neighborhood protection policies—such as occupancy limits, rental licensing, or preservation zoning—the market tends to convert single-family neighborhoods into high‑turnover student districts.

Research Angle for Your Student

Your student could investigate whether municipal behavior reflects **institutional alignment** by analyzing:

https://www.csus.edu/experience/community-engaged-university

– budget allocation patterns for infrastructure and enforcement

– code enforcement frequency by neighborhood

– property ownership turnover rates

– zoning amendments near the university

– political campaign contributions from development interests

Mapping those variables over time often reveals whether neighborhood change results primarily from **market pressure** or **policy-driven redevelopment priorities**.

In short, cities rarely admit to withdrawing services to push residents out, but urban research shows that **disinvestment and redevelopment incentives can produce similar outcomes**, especially in neighborhoods adjacent to expanding universities.

STOP sign compliance


I just met a sociology major living on a corner where three stop signs are routinely violated.

A wonderful master’s or even PhD thesis could be accomplished from the front yard:

Collect current data – change something (“Drive like your ___ lived here” signs) – collect modified data – …
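That before/after design lends itself to a simple two-proportion z-test. Here is a sketch with hypothetical compliance counts (the 42/300 and 71/300 figures are invented for illustration, not observed data):

```python
import math

def two_proportion_z(stops_before, n_before, stops_after, n_after):
    """Two-proportion z-test for a change in full-stop compliance
    between the baseline and post-intervention observation periods."""
    p1 = stops_before / n_before
    p2 = stops_after / n_after
    pooled = (stops_before + stops_after) / (n_before + n_after)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
    z = (p2 - p1) / se
    # Two-sided p-value from the normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 42 full stops out of 300 cars before the new
# yard signs went up, 71 out of 300 after.
z, p = two_proportion_z(42, 300, 71, 300)
print(round(z, 2), round(p, 4))
```

With counts like these the test would flag a statistically significant jump in compliance; with real field data the same few lines answer the thesis's core before/after question.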

So I discussed this with my Bot and had the following exchange:

I live in a university neighborhood with a lot of young people. For some reason, the city has put in a lot of unnecessary stop signs, possibly to control the speed of vehicles. The streets are sometimes used as alternatives to the freeways.

In any case, very few people actually stop; most use them as yield signs, and some just drive straight on through. I’m wondering if there have been sociological studies about this. Since you are a sociology professor, perhaps you could point us to these studies, and perhaps even help investigate the reasons that people give for disobeying simple traffic laws.

Many sociological and behavioral studies have examined **why people ignore simple traffic laws** such as stop signs. Research consistently shows that drivers rarely think of these violations as “breaking the law.” Instead, they interpret them as practical decisions based on risk, time pressure, and social norms.

Key Sociological Explanations

1. Perceived Low Risk

Drivers often evaluate whether stopping is necessary based on immediate conditions.

– If no other cars or pedestrians are visible, many drivers treat stop signs as yield signs.

– People estimate the **risk of collision and the risk of enforcement**, and if both seem low they feel comfortable rolling through.

Research in behavioral traffic safety shows that **perceived enforcement likelihood** strongly influences compliance. When drivers believe police monitoring is unlikely, violations increase.

2. Informal Social Norms

In many neighborhoods a shared behavior develops where most drivers roll through stop signs.

– Drivers observe others not stopping.

– The behavior becomes normalized.

– People follow the **local driving culture** rather than the written rule.

Sociologists describe this as **normative conformity**: individuals follow the behavior that appears typical in their environment.

3. “Over‑regulation” Perception

When drivers think traffic controls are unnecessary, compliance drops.

– If intersections have clear visibility and low traffic, drivers may view stop signs as excessive.

– Studies on “traffic control credibility” show that **drivers obey rules they perceive as reasonable** and ignore those they see as bureaucratic or unnecessary.

This is common in residential areas where stop signs are installed mainly for speed control.

4. Habit and Cognitive Shortcuts

Driving relies heavily on automatic behavior.

– Experienced drivers develop routines.

– Rolling stops can become habitual if there are no consequences.

Psychologists call this **automaticity**—people stop consciously evaluating every rule once driving becomes routine.

5. Time Pressure and Efficiency

Even small delays accumulate in drivers’ minds.

– Rolling through saves a few seconds.

– Drivers often prioritize efficiency over strict compliance.

This connects to the theory of **rational choice**, where people balance perceived cost vs. benefit.

Notable Research Areas and Studies

Several research traditions address this behavior.

**Traffic Psychology and Risk Compensation**

– Gerald Wilde’s *risk homeostasis theory* argues drivers maintain a personal acceptable risk level.

**Social Norms in Traffic Behavior**

**Traffic Control Credibility Research**

– Transportation engineering studies have shown that **overuse of stop signs leads to widespread noncompliance**.

– U.S. traffic engineering manuals (such as the MUTCD) explicitly warn against installing stop signs solely for speed control because drivers tend to ignore them.  https://mutcd.fhwa.dot.gov/

**Behavioral Compliance Studies**

Observational studies at intersections often find:

https://www.researchgate.net/scientific-contributions/Paulina-Reina-2163496004

– Full stops are made by a minority of drivers.

– “Rolling stops” are the most common behavior.

– Compliance increases when pedestrians are present or enforcement is visible.

Why Universities and Residential Areas See This More Often

University neighborhoods often combine several factors that increase violations:

– Many **four‑way stop installations** for speed control

– Low traffic volumes at many intersections

– High familiarity with the streets among residents

– Low perceived enforcement

These conditions make drivers feel safe treating stop signs informally.

Bottom Line

Sociological research suggests that most drivers do not consciously “choose to break the law.” Instead they respond to:

– perceived risk

– local social norms

– perceived reasonableness of the rule

– habit and efficiency

When a rule feels unnecessary and everyone else appears to ignore it, compliance tends to collapse. That dynamic is one of the clearest examples of how **informal social behavior can override formal regulation** in everyday life.

I wonder: if this behavior is reinforced in a young person’s life when they’re becoming an adult, does it result in anarchical behavior or law violations in general in their later life?

Research in sociology, criminology, and psychology suggests that **routine minor rule‑breaking (like rolling through stop signs)** does not automatically lead to broader lawlessness later in life. However, studies do show that repeated exposure to environments where rules are widely ignored can influence how people think about authority, norms, and compliance.

Social Learning and Norm Formation

One major explanation comes from **social learning theory**. This theory proposes that people learn acceptable behavior by observing what others do and what appears to be tolerated.

https://en.wikipedia.org/wiki/Social_learning_theory

Key mechanisms include:

– **Observation of peers**: If most drivers in a community treat stop signs as optional, new drivers learn that behavior quickly.

– **Lack of consequences**: When rule violations rarely lead to punishment, the behavior becomes normalized.

– **Group conformity**: People tend to follow local norms to avoid standing out.

Research on everyday rule violations shows that compliance is strongly shaped by **perceived social norms**, often more than by formal laws.

https://www.jstor.org/stable/20464374

The “Broken Windows” Perspective

Some criminological research links small visible violations with broader patterns of rule-breaking. The **broken windows theory** argues that when minor violations appear common and unchallenged, people may infer that social control is weak.

https://en.wikipedia.org/wiki/Broken_windows_theory

Examples studied include:

– fare evasion in public transit

– vandalism in public spaces

– traffic violations

However, later research has produced **mixed evidence**. Many studies find that minor disorder can signal weak enforcement but does not necessarily cause serious criminal behavior.

Developmental Research on Youth and Rule Compliance

Studies of adolescents and young adults show that attitudes toward rules form during this stage through:

– peer influence

– perceived fairness of rules

– institutional trust (police, government, authorities)

Young adults who see rules as **arbitrary or poorly justified** are more likely to selectively comply with them. However, most still maintain strong distinctions between minor regulatory violations (traffic rules, parking) and serious wrongdoing.

https://lawshun.com/article/what-is-the-definition-of-arbitrary-law

Traffic Behavior and Later Lawbreaking

Long‑term studies of drivers generally show:

– Minor traffic violations are **very common across all demographics**.

– Most people who commit these violations **do not escalate into broader criminal behavior**.

– A small subset of individuals who repeatedly engage in **risk‑taking behaviors** (dangerous driving, aggressive driving, substance‑impaired driving) are more likely to exhibit other forms of rule‑breaking.

This pattern aligns with research on **general risk‑taking personality traits**, rather than the influence of a single type of minor violation.

https://www.psychologytoday.com/us/articles/200011/are-you-a-risk-taker

University Environments

University neighborhoods often show high rates of informal rule adaptation because of:

– dense pedestrian activity

– frequent four‑way stops installed for traffic calming

– large populations of young drivers still developing driving habits

Students may adopt local driving norms quickly, but studies of civic attitudes show that most people **increase rule compliance with age**, particularly as responsibilities such as work, family, and legal risk become more salient.

Overall Findings

Research generally indicates:

– Minor traffic rule violations reflect **situational norms and risk perception**, not a broader rejection of law.

– Social environments can influence how seriously people take certain rules.

– Most individuals maintain strong internal boundaries between **minor convenience violations and serious illegal acts**.

In short, rolling through stop signs usually reflects **local behavioral norms and perceived practicality**, not the development of long‑term anarchic attitudes toward law.

How best can a community increase the safety of its neighborhood by controlling this kind of behavior?

Communities facing widespread stop‑sign violations typically improve safety by combining **street design changes, credible traffic controls, and social enforcement** rather than relying only on signs or policing. Research in transportation engineering and behavioral sociology consistently shows that drivers follow rules most reliably when the **physical environment makes the safe behavior feel natural or unavoidable**.

Improve Street Design (Most Effective)

Traffic safety studies show that **self‑enforcing street design** produces the highest compliance because drivers physically cannot maintain high speeds.

Common measures include:

– **Speed humps or speed tables** that force drivers to slow down.

– **Raised intersections** that function like large speed bumps at crossings.

– **Curb extensions (bulb‑outs)** that narrow intersections and shorten pedestrian crossing distance.

– **Chicanes or lane shifts** that break up long straight streets used as cut‑through routes.

– **Mini roundabouts** replacing four‑way stops at low‑volume intersections.

These changes work because drivers respond instinctively to road geometry rather than relying on voluntary compliance.

Use Credible Traffic Controls

Drivers are more likely to obey controls that appear necessary and logical.

Approaches include:

– **Removing unnecessary stop signs** and replacing them with yield signs where appropriate.

– **Installing all‑way stops only where traffic volume or visibility warrants them.**

– **Using flashing beacons or marked pedestrian crossings** in areas with heavy foot traffic.

Traffic engineers refer to this as maintaining **“control credibility.”** When signs match the actual risk, compliance improves.

https://en.wikipedia.org/wiki/Road_traffic_control

Reduce Cut‑Through Traffic

Neighborhood streets often become informal bypass routes when nearby highways are congested.

Possible interventions:

– **Partial street closures or diverters** that prevent through‑traffic but allow local access.

– **Turn restrictions during peak hours**.

– **Neighborhood traffic circles** that slow vehicles while allowing movement.

These measures reduce the number of non‑resident drivers who are unfamiliar with the area and more likely to ignore stop signs.

Increase Visibility and Awareness

Behavior often changes when drivers feel observed or reminded of community presence.

Examples include:

– **Neighborhood speed feedback signs** displaying vehicle speed.

– **Highly visible crosswalk markings** and stop bars.

– **Community signage such as “Drive Like Your Kids Live Here.”**

– **Street lighting improvements** at intersections.

These cues activate social norms and heighten risk perception.

https://www.homedepot.com/p/18-in-x-24-in-Drive-Like-Your-Kids-Live-Here-Sign-840051/206740790

Targeted Enforcement

Limited but visible enforcement can shift behavior when used strategically.

Effective strategies include:

– **Short-term police enforcement campaigns** at known problem intersections.

– **Automated speed cameras where permitted.**

– **Community reporting programs** that track traffic concerns.

Even brief enforcement waves can reset expectations about compliance.

Community Engagement

Safety improvements work best when residents and institutions (such as universities) participate.

Actions communities often take:

– **Neighborhood traffic studies conducted with local government.**

– **University partnerships encouraging students to respect residential streets.**

– **Resident traffic committees that identify priority locations.**

Shared ownership of the problem increases long‑term adherence to safety measures.

Key Insight from Traffic Research

Studies in urban planning consistently show that **drivers respond more to street design than to rules alone**. Signs ask drivers to behave safely; well‑designed streets **make unsafe behavior difficult or uncomfortable**.

https://tsr.international

https://www.ots.ca.gov/media-and-research/data-and-statistics

https://link.springer.com/book/10.1007/978-3-031-85751-5

Communities that combine **traffic calming, credible signage, reduced cut‑through traffic, and visible enforcement** typically see the largest and most lasting reductions in stop‑sign violations and neighborhood speeding.

Understanding Recycling

What follows is an interchange of information between me and chatGPT-4.  There are a number of reasons that I publish this.  First is the subject matter: everyone needs more knowledge, skills, and maturity; thus our culture’s commitment to educating the next generation (as well as the older ones).  Secondly, to demonstrate the conversational (chat) type of exchange in acquiring data and requesting first drafts.  Thirdly, to assist teachers in drawing a line when the culture won’t.  The culture is stuck in the era where consumption is valued over production or conservation…  Fourthly

I started the exchange with…

Pretend you are a professor of environmental studies.  A student comes to you with a question about recycling.  The student has made the observation that, while the culture and the political posture in general hold that people should care for the planet and do as much recycling as possible, the fact of the matter is that recycling is offloaded to companies that collect the trash and supposedly do the recycling.  However, their recycling is economically driven, and if there is no economic profit to be made in the recycling, they just dump it into the ocean or wherever.  Is this a valid observation, and if so, is there anything that you can think of that we can do to improve our recycling?  Thank you.

ChatGPT responded with…

That is an excellent and perceptive question—one that goes to the heart of the gap between environmental ideals and the economic systems that underpin how recycling actually functions. Your observation is largely valid: while recycling is promoted as a civic virtue, the system that manages it is primarily market-driven. When the economics of recycling collapse, materials often end up in landfills or, in some cases, exported to countries with weaker environmental oversight.

The Reality Behind Modern Recycling

1. The Market Dependency of Recycling

Recycling in most of the United States operates within a ‘commodities market’. Municipalities contract private waste-management companies to collect recyclables, which are then sold to processors. The viability of recycling depends on the ‘market price’ for materials such as aluminum, paper, and certain plastics.

When global oil prices are low, virgin plastic becomes cheaper than recycled plastic. Similarly, when demand for mixed paper or scrap metal drops, recyclers have little incentive to process them. After China’s 2018 “National Sword” policy banned imports of most foreign recyclables, global markets were flooded with low-value materials that domestic systems could not handle. As a result, many U.S. municipalities curtailed their recycling programs or quietly diverted recyclables to landfills.

2. The Illusion of the “Blue Bin”

The public perception that all recyclables are responsibly processed is misleading. Studies by the U.S. Environmental Protection Agency (EPA) and the Ellen MacArthur Foundation show that less than “10% of global plastic waste” is actually recycled. The rest is either incinerated, landfilled, or leaked into the environment.

This discrepancy arises partly from “wishcycling”—placing non-recyclable items into bins—and partly from the contamination of recyclable streams, which makes materials economically unviable to process. The result is a system that looks participatory but functions inefficiently.

3. Recycling as a Privatized Responsibility

The current model externalizes environmental responsibility from producers to consumers and municipalities. Corporations that produce single-use packaging often avoid accountability for the end-of-life management of their products. This privatization of responsibility allows companies to market themselves as sustainable while passing the real costs to the public.

Why the Observation Is Valid

Your observation that recycling follows economic logic rather than ecological necessity is correct. Recycling is not inherently profitable; it only works when markets align with environmental goals. When they do not, materials are discarded. This is not merely a failure of individual behavior but a structural flaw in how recycling systems are designed and financed.

Strategies for Improvement

1. Extended Producer Responsibility (EPR)

EPR laws require manufacturers to take responsibility for the full lifecycle of their products, including collection and recycling. This shifts costs from municipalities to producers and incentivizes design for recyclability.

– “Example:” The European Union’s Packaging and Packaging Waste Directive mandates that producers finance recycling infrastructure.

– “Action:” Support state or federal legislation that enforces producer accountability for packaging waste.

2. Domestic Recycling Infrastructure

The U.S. lacks sufficient domestic facilities to process recyclables. Investment in regional material recovery facilities (MRFs) and advanced sorting technologies can reduce reliance on exports.

– “Action:” Advocate for public–private partnerships and federal funding (e.g., through the EPA’s ‘Recycling Infrastructure and Market Development Grant Program’).

– “Goal:” Build a closed-loop system that keeps materials within the domestic economy.

3. Standardization and Transparency

Recycling rules vary widely between municipalities, causing confusion and contamination. A standardized labeling system—such as the ‘How2Recycle’ label—can improve sorting accuracy.

– “Action:” Encourage local governments to adopt uniform recycling guidelines and require transparency from waste contractors about material destinations.

4. Circular Economy Design

The long-term solution lies in redesigning products and supply chains to minimize waste altogether. A ‘circular economy’ emphasizes reuse, repair, and remanufacturing rather than disposal.

– “Action:” Support policies that promote refillable packaging, deposit-return systems, and material innovation.

– “Goal:” Reduce dependency on recycling by preventing waste generation at the source.

5. Consumer and Institutional Leverage

While systemic reform is necessary, individuals and institutions can still exert pressure through procurement choices. Universities, for instance, can adopt zero-waste purchasing policies that prioritize suppliers using post-consumer materials.

– “Action:” Choose products with minimal or recyclable packaging and support local repair or reuse initiatives.

– “Goal:” Align purchasing power with sustainability values.

Summary

Your critique captures a fundamental truth: recycling, as currently practiced, is more a ‘market mechanism’ than an ‘environmental solution’. When profit disappears, so does the incentive to recycle. To make recycling genuinely effective, we must move beyond consumer good intentions toward structural reform—holding producers accountable, investing in domestic capacity, and redesigning materials for circularity.

In short, recycling should not depend on whether it is profitable; it should be treated as a public good—an essential part of environmental stewardship and resource management. Only by aligning economic systems with ecological realities can the promise of recycling match its cultural ideal.

I continued the exchange with…

Is it possible to transfer the responsibility for the responsible use of materials to the producer, and to not allow products to reach the market that mix media in ways that make them difficult to recycle?  And is there any evidence that other countries may have a better handle on this, or perhaps even that, in the past, there were better approaches to recycling?  Thank you.

ChatGPT responded with…

That is an exceptionally thoughtful question—and it touches on one of the most transformative ideas in environmental policy today: ‘shifting responsibility for materials management from consumers to producers’. This concept, known as “Extended Producer Responsibility (EPR)” or “Product Stewardship”, is not only possible but already being implemented with measurable success in several countries. It also has historical precedents that show how societies once handled materials more sustainably before the rise of disposable consumer culture.

Transferring Responsibility to Producers

1. The Principle of Extended Producer Responsibility (EPR)

EPR is a policy framework that makes producers financially and operationally responsible for the post-consumer stage of their products. Instead of municipalities or consumers bearing the cost of waste management, producers must design, collect, and recycle their products—or pay into systems that do.

The central goals of EPR are:

– To “incentivize eco-design”, encouraging companies to use single-material packaging, reduce toxins, and simplify recycling.

– To “internalize environmental costs”, so that waste management is reflected in the product’s price rather than externalized to the public.

– To “create closed-loop systems”, where materials are recovered and reused rather than discarded.

2. Design Controls: Preventing Mixed-Media Products

Under EPR, regulators can restrict products that combine materials in ways that make recycling impractical—for example, plastic–metal laminates or multi-layer cartons. The European Union’s “Ecodesign Directive” and “Packaging and Packaging Waste Regulation (2023)” require that packaging be ‘recyclable by design’, meaning it must be separable into recoverable components using existing technology.

Such design restrictions are technically and legally feasible. They align with the broader concept of a “circular economy”, in which waste is treated as a design flaw rather than an inevitable byproduct.

International Examples of Producer Responsibility

These systems demonstrate that when producers are responsible for the end-of-life of their products, design innovation follows. For instance, European beverage companies have redesigned bottles to eliminate colored plastics and non-recyclable caps, directly responding to EPR cost structures.

| Country/Region | Policy Framework | Key Features | Results |
| --- | --- | --- | --- |
| European Union | EPR under the Waste Framework Directive | Mandatory producer funding for packaging, electronics, and vehicles | Packaging recycling rates above 65%; strong eco-design incentives |
| Germany | Green Dot Program (Der Grüne Punkt) | Producers pay fees based on packaging recyclability and weight | Pioneered EPR globally; over 70% packaging recovery |
| Japan | Containers and Packaging Recycling Law (1995) | Producers collect and recycle packaging; strict material separation | High-quality recycling streams; strong consumer compliance |
| South Korea | Volume-Based Waste Fee + EPR | Producers pay recycling fees; citizens pay for non-recyclable waste | Recycling rate above 80%; rapid reduction in mixed-material packaging |
| Canada (British Columbia) | Recycling Regulation (2004) | Full producer funding for residential recycling | Transparent reporting; improved material recovery rates |

Lessons from the Past: When Recycling Worked Better

1. Pre-Disposable Economy (Before 1950s)

Before the rise of mass-produced plastics and single-use packaging, material reuse was embedded in everyday life.

– “Glass bottles” were collected, sterilized, and refilled locally.

– “Metal containers” and textiles were repaired or repurposed.

– “Paper and organic waste” were recovered by local industries or composted.

This system functioned not through formal recycling programs but through ‘economic circularity’—materials had intrinsic value, and waste was minimal. The shift toward disposability in the mid-20th century—driven by petrochemical innovation and consumer convenience—broke this loop.

2. Wartime Resource Recovery (1940s)

During World War II, many nations implemented large-scale material recovery programs. Citizens collected metals, rubber, and paper for reuse in manufacturing. These efforts demonstrated that with coordinated policy and social mobilization, recycling can achieve near-total participation and efficiency.

How the U.S. Could Implement Producer Responsibility

1. Federal or State-Level EPR Legislation

Several U.S. states—Maine, Oregon, California, and Colorado—have recently passed EPR laws for packaging. These laws require producers to fund collection and recycling systems and to report on material composition. Expanding these frameworks nationally could transform the recycling landscape.

2. Eco-Design Standards

Regulators can mandate that packaging be recyclable, compostable, or reusable by design. The U.S. EPA and Federal Trade Commission could align labeling standards with recyclability criteria to prevent “greenwashing.”

3. Deposit-Return Systems (DRS)

Bottle deposit programs, already successful in states like Michigan and Oregon, achieve return rates above 85%. Expanding DRS to more materials (e.g., aluminum, PET, glass) would create a reliable stream of high-quality recyclables.

4. Transparent Material Accounting

Requiring producers to disclose the recyclability and composition of their packaging would allow consumers and regulators to track progress. This transparency is essential for accountability and market transformation.

Summary

Yes—transferring responsibility for material use and recycling to producers is both possible and demonstrably effective. Other countries have shown that when manufacturers bear the cost of waste, they innovate toward simpler, more recyclable designs. Historically, societies that valued materials intrinsically—before the disposable age—achieved far higher levels of reuse and circularity.

The path forward involves combining ‘the wisdom of the past’ (durability, repair, local reuse) with ‘the regulatory sophistication of the present’ (EPR, eco-design, transparency). In doing so, recycling becomes not a moral burden on individuals but a structural feature of a sustainable economy—one where waste is no longer an afterthought, but a design decision.

Understanding SLURS

What follows is an interchange of information between me and chatGPT-4.  There are a number of reasons that I publish this.  First is the subject matter: everyone needs more knowledge, skills, and maturity; thus our culture’s commitment to educating the next generation (as well as the older ones).  Secondly, to demonstrate the conversational (chat) type of exchange in acquiring data and requesting first drafts.  Thirdly, to assist teachers in drawing a line when the culture won’t.  The culture is stuck in the era where colonization needed to establish order and control, so indigenous peoples were belittled and worse…  Fourthly

I initiated the exchange…

Pretend you are a professor of social psychology with a particular interest in ethnic groups.  A student comes to you desiring help on a thesis related to the use of ethnic slurs in spoken exchanges, particularly when the speaker is a member of the group being slurred.  It appears to the student that, when that happens, the person has accepted the degrading implications of the slur and is no longer avoiding using it.  What would you suggest to the student for that type of inquiry?

ChatGPT reply…

That is a sophisticated and important line of inquiry, and it touches on several key debates in social psychology, sociolinguistics, and identity studies. The use of ethnic slurs *within* an in-group—by members of the group being referenced—has been studied extensively, and your observation about possible internalization of degrading implications is one of several theoretical interpretations. However, it’s essential to recognize that the meaning and function of slur usage are highly context-dependent and can vary dramatically across speakers, audiences, and settings.

Conceptual Frameworks

1. Reappropriation and Linguistic Resistance

https://en.wikipedia.org/wiki/Reappropriation

One major framework views in-group use of slurs as ‘reappropriation’—a process by which a stigmatized group takes control of a derogatory term and redefines its meaning. This can serve as a form of empowerment or solidarity, transforming a symbol of oppression into one of shared identity. For example, sociolinguistic research shows that when slurs are used among group members in informal contexts, they can signal familiarity, authenticity, or even affection rather than degradation.

2. Internalized Oppression and Self-Devaluation

https://en.wikipedia.org/wiki/Internalized_oppression

https://www.jstor.org/stable/44076533

Your interpretation aligns with another theoretical lens: ‘internalized racism’ or ‘internalized oppression’. According to this view, repeated exposure to societal prejudice can lead individuals to unconsciously adopt negative stereotypes about their own group. When a person uses a slur self-referentially, it might reflect acceptance of the dominant group’s derogatory framing. This interpretation draws from classic works on symbolic interactionism and social identity theory, where identity is shaped through both self-perception and others’ evaluations.

3. Contextual Pragmatics and Code-Switching

https://academic.oup.com/fmls/article/60/1/19/7640021?login=false

A third approach focuses on ‘contextual pragmatics’—how meaning shifts depending on who is speaking, to whom, and in what setting. Within-group use of slurs can be an act of ‘code-switching’, where language is strategically adapted to fit social norms within a particular community. The same word that is offensive in one context may be neutral or even positive in another. This underscores that meaning is not fixed in the word itself but negotiated socially.

Psychological Mechanisms

https://en.wikipedia.org/wiki/Cognitive_dissonance

https://pubmed.ncbi.nlm.nih.gov/9705803/ 

– “Cognitive Dissonance Reduction:” Individuals may use slurs as a way to reconcile conflicting feelings about their identity and societal prejudice.

– “Humor and Desensitization:” Humor can serve as a coping mechanism; using slurs jokingly may reduce their emotional sting, though it can also perpetuate normalization.

Research Implications for a Thesis

If your student is developing a thesis, they might consider:

– Conducting “discourse analysis” of in-group versus out-group usage across contexts (e.g., online forums, music, or interpersonal communication).

https://en.wikipedia.org/wiki/Discourse_analysis

– Exploring “attitudinal surveys” to measure whether in-group slur use correlates with internalized bias or with empowerment.

– Examining “generational and subcultural differences”, since younger speakers may use such language differently than older ones.

A mixed-methods approach—combining qualitative interviews with quantitative measures of identity and self-esteem—often yields the richest insights.

In summary 

While your hypothesis about internalization is supported by some psychological models, it represents only one possible interpretation among several. In-group slur usage can simultaneously signal resistance, solidarity, or internalized stigma, depending on context and intent. A nuanced thesis would benefit from exploring these competing explanations and the conditions under which each is most applicable.

My response…

Please repeat what you just said, but embellish it with references and sources of the information.  Thank you.

ChatGPT response…

That’s an excellent refinement of your request. Below is a more detailed, academically grounded version of the same explanation, now expanded with references to major theorists and empirical studies in social psychology, sociolinguistics, and critical race theory. This version is written in the style of a graduate-level research consultation.

The phenomenon of in-group use of ethnic slurs presents a complex intersection of language, identity, and power. Your observation—that such use may reflect an internalization of degrading implications—is one legitimate interpretation, but it exists alongside alternative frameworks that emphasize reappropriation, contextual pragmatics, and identity negotiation. The meaning of a slur is never static; it is socially constructed and contextually dependent.

Theoretical Frameworks

1. Reappropriation and Linguistic Resistance

https://en.wikipedia.org/wiki/Reappropriation

Reappropriation refers to the process by which marginalized groups reclaim derogatory labels and invest them with new, often empowering meanings. This concept is rooted in sociolinguistic and cultural theory. Allan and Burridge (2006) describe this as “semantic inversion,” where a term of abuse becomes a badge of pride. Similarly, Galinsky et al. (2013) demonstrated experimentally that self-labeling with a derogatory term can reduce its perceived offensiveness and increase feelings of empowerment among group members (*Psychological Science*, 24[10], 2020–2029).

Judith Butler’s (1997) *Excitable Speech: A Politics of the Performative* provides a philosophical foundation for this idea, arguing that language can be re-signified through repeated performative acts. In this view, when an in-group speaker uses a slur, they may be engaging in a subversive act that undermines the term’s original derogatory power.

2. Internalized Oppression and Self-Devaluation

https://en.wikipedia.org/wiki/Internalized_oppression

https://www.jstor.org/stable/44076533

Your interpretation aligns with the concept of *internalized racism* or *internalized oppression*, where individuals adopt society’s negative stereotypes about their own group. This idea has roots in the work of Frantz Fanon (1952) in *Black Skin, White Masks*, who described how colonized subjects internalize the colonizer’s gaze and language. Similarly, Pyke (2010) defines internalized racism as “the internalization of racial oppression by the racially subordinated” (*Sociology Compass*, 4[4], 238–255).

From a social psychological perspective, Tajfel and Turner’s (1979) *Social Identity Theory* suggests that individuals derive part of their self-concept from group membership. When that group is stigmatized, members may experience identity threat, leading to compensatory strategies—either revaluing the group positively (as in reappropriation) or accepting negative stereotypes (as in internalization). The latter could manifest linguistically through self-directed use of slurs.

3. Contextual Pragmatics and Code-Switching

https://academic.oup.com/fmls/article/60/1/19/7640021?login=false

Pragmatic and sociolinguistic theories emphasize that meaning is co-constructed in context. Erving Goffman’s (1959) *The Presentation of Self in Everyday Life* and later work on frame analysis (1974) illustrate how language use depends on situational framing. Within-group slur use can function as *code-switching*—a deliberate shift in linguistic register to signal solidarity, authenticity, or humor (Bucholtz & Hall, 2005, *Journal of Linguistic Anthropology*, 15[1], 58–80).

Mary Bucholtz (1999) and Geneva Smitherman (1977, *Talkin and Testifyin*) both show how African American Vernacular English (AAVE) incorporates reclaimed or contextually redefined terms that carry different meanings depending on speaker and audience. Thus, a slur uttered among peers may serve as an in-group marker of shared experience, while the same word from an outsider remains deeply offensive.

Psychological Mechanisms

– **Social Identity Maintenance:** Reappropriated slurs can reinforce in-group cohesion and shared history (Branscombe et al., 1999, *Advances in Experimental Social Psychology*, 31, 1–40).     

https://pubmed.ncbi.nlm.nih.gov/9705803/ 

– **Cognitive Dissonance Reduction:** Festinger’s (1957) theory suggests that using a slur self-referentially may help reduce tension between self-concept and societal stigma.

https://en.wikipedia.org/wiki/Cognitive_dissonance

– **Humor and Desensitization:** Billig (2005) notes that humor can serve both as a coping mechanism and as a means of normalizing prejudice (*Laughter and Ridicule: Towards a Social Critique of Humour*).

Research Implications for a Thesis

A rigorous thesis could explore these dynamics through several methodological approaches:

– **Discourse Analysis:** Examine how slurs are used in in-group versus out-group contexts (e.g., online communities, music lyrics, or conversational transcripts). See van Dijk (1992) on discourse and racism.

https://en.wikipedia.org/wiki/Discourse_analysis

– **Survey Research:** Measure correlations between in-group slur use and indicators of internalized racism or group pride (adapted from Luhtanen & Crocker’s [1992] *Collective Self-Esteem Scale*).

– **Interviews or Ethnography:** Capture participants’ own interpretations of their language use, following the qualitative traditions of ethnomethodology and symbolic interactionism (Blumer, 1969).

A mixed-methods design would allow triangulation—quantitative data to assess psychological correlates and qualitative data to interpret lived experience.

Summary

In-group use of ethnic slurs cannot be reduced to a single meaning. It may reflect internalized oppression, reappropriative empowerment, contextual signaling, or combinations thereof. The key lies in analyzing *who* is speaking, *to whom*, *in what context*, and *for what purpose*. By situating your thesis within these theoretical traditions—Butler’s performativity, Tajfel’s social identity theory, and Fanon’s decolonial psychology—you can produce a nuanced, empirically grounded account of how language both reflects and reshapes social hierarchies.

Key References (for further reading):

– Allan, K., & Burridge, K. (2006). *Forbidden Words: Taboo and the Censoring of Language.* Cambridge University Press.

– Billig, M. (2005). *Laughter and Ridicule: Towards a Social Critique of Humour.* Sage.

– Bucholtz, M., & Hall, K. (2005). “Identity and Interaction: A Sociocultural Linguistic Approach.” *Journal of Linguistic Anthropology*, 15(1), 58–80.

– Butler, J. (1997). *Excitable Speech: A Politics of the Performative.* Routledge.

– Fanon, F. (1952). *Black Skin, White Masks.* Grove Press.

– Galinsky, A. D., et al. (2013). “The Reappropriation of Stigmatizing Labels.” *Psychological Science*, 24(10), 2020–2029.

– Pyke, K. (2010). “What is Internalized Racial Oppression and Why Don’t We Study It?” *Sociology Compass*, 4(4), 238–255.

– Tajfel, H., & Turner, J. C. (1979). “An Integrative Theory of Intergroup Conflict.” In *The Social Psychology of Intergroup Relations.* Brooks/Cole.

– van Dijk, T. A. (1992). “Discourse and the Denial of Racism.” *Discourse & Society*, 3(1), 87–118.

In sum

Your student’s intuition about internalization is well-founded but represents only one dimension of a broader sociopsychological phenomenon. A thesis that situates this within the interplay of reappropriation, identity, and power would make a strong and original contribution to the field.

My next response…

I wonder about the appearance of slurs throughout history and the social interactions that may have resulted from them.  What do you think of that?

That is a deeply insightful question—one that invites historical, linguistic, and psychological analysis. The history of slur usage reveals how language both reflects and enforces systems of power, and how the social consequences of slurs have shifted over time as societies have redefined norms around identity, hierarchy, and belonging. Examining slurs historically allows us to see them not as isolated insults, but as tools of social regulation and resistance.

Historical Emergence of Slurs

1. Colonial and Racial Hierarchies

Many ethnic slurs emerged during periods of colonization and slavery, when dominant groups sought to linguistically codify difference and inferiority. For example, in the 17th–19th centuries, European colonial powers used racialized labels to justify domination and exploitation. Linguist Deborah Cameron (1995) notes that such terms were “performative acts of classification,” reinforcing ideological hierarchies through everyday speech (*Verbal Hygiene*, Routledge).

Frantz Fanon (1952) observed that colonial language functioned as a psychological weapon: by naming the colonized subject in derogatory terms, colonizers produced internalized inferiority and dependence (*Black Skin, White Masks*). Thus, slurs were not only insults but mechanisms of social control.

2. Immigration and Ethnic Boundaries

In the 19th and early 20th centuries, waves of immigration to Western countries produced new ethnic slurs that served to mark outsiders—Irish, Italian, Jewish, Chinese, and other immigrant groups were assigned labels that condensed economic, religious, and racial anxieties into linguistic shorthand. As historian Noel Ignatiev (1995) shows in *How the Irish Became White*, derogatory labels helped define who was “American” and who was not, reinforcing the social construction of whiteness.

3. Gendered and Intersectional Slurs

By the 20th century, slurs increasingly reflected intersections of race, gender, and sexuality. Feminist scholars such as Judith Butler (1997) and Sara Ahmed (2004) have argued that slurs targeting women or LGBTQ+ people often functioned as disciplinary speech acts—punishing deviation from normative gender or sexual behavior. These words policed social boundaries as much as they expressed prejudice.

Social Interactions and Consequences

1. Slurs as Social Boundary Markers

Historically, slurs have delineated in-group and out-group membership. Sociologist Pierre Bourdieu (1991) described language as a form of *symbolic power*: the authority to define what is legitimate speech and who has the right to speak. When a slur is uttered, it not only insults but also reasserts a social hierarchy—reminding the target of their “place” within it.

2. Conflict and Resistance

Slurs have often provoked collective resistance. The Civil Rights Movement, feminist movements, and LGBTQ+ activism all included efforts to challenge or reclaim slurs. For instance, the reclamation of the word “queer” in the late 20th century represents a deliberate inversion of stigma into solidarity (Jagose, 1996, *Queer Theory: An Introduction*). Similarly, reappropriation of racial slurs in music or art can be read as acts of cultural defiance, though their reception remains contested.

3. Legal and Institutional Responses

Over time, the social consequences of slur usage have extended into legal and institutional realms. Hate speech laws in countries such as Canada, the UK, and Germany recognize slurs as potential incitements to discrimination or violence. In the United States, while the First Amendment protects most speech, courts have acknowledged that slurs can create *hostile environments* in workplaces and schools (Matsuda et al., 1993, *Words That Wound*).

Psychological and Social Effects

1. Stigma and Identity Threat

Psychological research shows that exposure to slurs can trigger stress responses and reduce cognitive performance, a phenomenon linked to *stereotype threat* (Steele & Aronson, 1995, *Journal of Personality and Social Psychology*). Slurs act as reminders of social devaluation, reinforcing marginalization.

2. Interpersonal Dynamics

In social interactions, the utterance of a slur can instantly transform relationships. Goffman’s (1963) concept of *spoiled identity* applies here: once a slur is used, it marks the target as socially tainted, altering how others perceive and interact with them. Even when used “jokingly,” slurs can maintain asymmetrical power relations by normalizing derogation under the guise of humor (Billig, 2005).

3. Reappropriation as Healing or Division

When members of a marginalized group use slurs among themselves, the social outcome depends on context. It can foster unity and resilience (Galinsky et al., 2013), but it can also reproduce internalized hierarchies if the underlying stigma remains unchallenged. The same word may thus serve as both a tool of empowerment and a reminder of oppression.

Summary

Across history, slurs have served as linguistic instruments of power—tools to define, exclude, and control. Yet they have also become sites of contestation and transformation. The social interactions surrounding slurs—from violence to solidarity—reveal how language both mirrors and molds social structures. Studying their historical trajectories allows us to see that slurs are not static relics of prejudice, but evolving symbols through which societies negotiate identity, hierarchy, and belonging.

Selected References

– Ahmed, S. (2004). *The Cultural Politics of Emotion.* Routledge.

– Billig, M. (2005). *Laughter and Ridicule: Towards a Social Critique of Humour.* Sage.

– Bourdieu, P. (1991). *Language and Symbolic Power.* Harvard University Press.

– Butler, J. (1997). *Excitable Speech: A Politics of the Performative.* Routledge.

– Cameron, D. (1995). *Verbal Hygiene.* Routledge.

– Fanon, F. (1952). *Black Skin, White Masks.* Grove Press.

– Galinsky, A. D., et al. (2013). “The Reappropriation of Stigmatizing Labels.” *Psychological Science*, 24(10), 2020–2029.

– Ignatiev, N. (1995). *How the Irish Became White.* Routledge.

– Jagose, A. (1996). *Queer Theory: An Introduction.* NYU Press.

– Matsuda, M. J., et al. (1993). *Words That Wound: Critical Race Theory, Assaultive Speech, and the First Amendment.* Westview Press.

– Steele, C. M., & Aronson, J. (1995). “Stereotype Threat and the Intellectual Test Performance of African Americans.” *Journal of Personality and Social Psychology*, 69(5), 797–811.

In essence

The historical appearance of slurs and their social consequences illuminate how language operates as a living record of power relations. Each era’s slurs tell us not only what people said—but what they believed about who belonged, who did not, and who had the authority to decide.

My continuation…

The thought occurred to me that a history of slurs has to include those that are not spoken: conceptual slurs, economically based slurs, religiously oriented ones, … I wonder if you would be able to connect that with the immigration policies currently being followed in the United States. Thank you.

ChatGPT responds…

That is a profound and perceptive extension of the discussion. You are absolutely right that *slurs* need not be limited to spoken words. They can take conceptual, institutional, or structural forms—embedded in economic systems, immigration policies, and religious or cultural hierarchies. In this broader sense, “slurring” operates as a *social process of devaluation* rather than merely as a linguistic act.

Conceptual and Structural Slurring

1. Beyond Language: Symbolic and Institutional Devaluation

Pierre Bourdieu (1991) described language as one form of *symbolic power*—but he also emphasized that symbolic violence extends beyond speech to include the institutionalized ways societies assign value to certain groups. Conceptual slurs are those embedded in policy, law, or economic practice that implicitly mark some populations as less worthy, less capable, or less “deserving” of inclusion.

For example, sociologist Eduardo Bonilla-Silva (1997) coined the term *color-blind racism* to describe how ostensibly neutral systems perpetuate racial hierarchies without overtly racist language. Similarly, Iris Marion Young (1990) identified *structural injustice* as a condition in which social processes—such as labor markets or immigration systems—systematically disadvantage certain groups even without explicit prejudice.

2. Economic Slurring

Economic slurs manifest when entire populations are characterized through economic stereotypes—“burdens,” “job-takers,” or “drains on resources.” These labels, though not always spoken as insults, function as discursive justifications for exclusionary policy. Ruth Wilson Gilmore (2007) calls this the “criminalization of surplus populations,” where marginalized groups are framed as economically expendable (*Golden Gulag*).

Historically, such economic slurring accompanied anti-immigrant sentiment: Chinese laborers in the 19th century were depicted as “coolies” undermining white labor; during the Great Depression, Mexican workers were scapegoated as economic threats, leading to mass deportations. These narratives persist in modern rhetoric about immigrants “taking jobs” or “draining welfare systems.”

3. Religious and Cultural Slurring

Religious slurs—especially those targeting Muslims, Jews, and other minority faiths—often operate through policy discourse rather than direct insult. Edward Said’s *Orientalism* (1978) remains foundational here: he showed how Western societies constructed the “Oriental” as irrational, backward, and threatening, a conceptual slur that justified colonial domination.

In the modern context, this logic reappears in policies that treat certain religious identities as inherently suspect. The post-9/11 era saw the rise of “security slurs”—where Muslim identity itself became a proxy for danger, leading to surveillance, profiling, and travel restrictions.

Connection to Contemporary U.S. Immigration Policy

1. The Language of Policy as a Site of Slurring

Contemporary U.S. immigration policy often employs coded language that performs the same social function as slurs: it categorizes and devalues. Terms like “illegal alien,” “chain migration,” or “invasion” carry moral and racialized undertones. Linguist Otto Santa Ana (2002) demonstrated how metaphors in media and policy—such as describing immigrants as “floods” or “waves”—dehumanize migrants and frame them as natural disasters rather than people (*Brown Tide Rising*).

These conceptual slurs shape public perception and justify restrictive measures, from family separation to asylum deterrence. They transform human movement into a perceived threat, echoing the historical function of slurs as boundary-enforcing tools.

2. Economic Framing and Exclusion

Current debates about immigration often center on economic worthiness—who contributes versus who “costs.” This reflects a neoliberal form of slurring: individuals are valued according to their market productivity. Policies favoring high-skilled immigrants while restricting refugees or low-wage workers implicitly construct a hierarchy of human value. As sociologist Tanya Golash-Boza (2015) argues, deportation systems are structured around *racialized economic utility*—those deemed less economically valuable are more easily excluded (*Deported: Immigrant Policing, Disposable Labor, and Global Capitalism*).

3. Religious and Cultural Dimensions

Policies such as the 2017 “travel ban” targeting several Muslim-majority countries illustrate how religious identity becomes a policy category. Though framed in security terms, the ban’s selective application and rhetoric reflected long-standing cultural slurs associating Islam with violence or backwardness. This institutionalized a conceptual slur—transforming prejudice into bureaucratic practice.

4. Border Politics and Symbolic Violence

The U.S.–Mexico border itself functions as a symbolic site of slurring. Anthropologist Nicholas De Genova (2013) describes the border as a “spectacle of enforcement” that dramatizes the illegality of certain bodies (“Spectacles of Migrant ‘Illegality’”). The wall, detention centers, and militarized language are not just physical barriers; they are communicative acts that label migrants as perpetual outsiders.

Summary

When we expand the concept of slurring beyond spoken words, we uncover a continuum of symbolic violence that stretches from interpersonal insult to institutional policy. Economic, religious, and immigration discourses often function as *conceptual slurs*—they label, devalue, and exclude entire populations under the guise of neutrality or legality.

Contemporary U.S. immigration policy continues this historical pattern: its rhetoric and structure embed hierarchies of worth that echo older forms of linguistic prejudice. Understanding this connection allows us to see that slurs—whether spoken or systemic—are not merely about offense, but about power: who has the authority to define others, and whose humanity is recognized in the process.

Selected References

– Bourdieu, P. (1991). *Language and Symbolic Power.* Harvard University Press.

– Bonilla-Silva, E. (1997). “Rethinking Racism: Toward a Structural Interpretation.” *American Sociological Review*, 62(3), 465–480.

– De Genova, N. (2013). “Spectacles of Migrant ‘Illegality’: The Scene of Exclusion, the Obscene of Inclusion.” *Ethnic and Racial Studies*, 36(7), 1180–1198.

– Gilmore, R. W. (2007). *Golden Gulag: Prisons, Surplus, Crisis, and Opposition in Globalizing California.* University of California Press.

– Golash-Boza, T. (2015). *Deported: Immigrant Policing, Disposable Labor, and Global Capitalism.* NYU Press.

– Said, E. (1978). *Orientalism.* Pantheon Books.

– Santa Ana, O. (2002). *Brown Tide Rising: Metaphors of Latinos in Contemporary American Public Discourse.* University of Texas Press.

– Young, I. M. (1990). *Justice and the Politics of Difference.* Princeton University Press.

In essence, the same logic that once animated overt ethnic slurs now often operates through economic, religious, and policy language. The vocabulary may have changed, but the underlying social function—defining who belongs and who does not—remains strikingly consistent.

My final thought…

The thought occurred to me that the current situation in the United States in relation to immigrants is about to change. We have taken an excessively hard-nosed, deport-everybody posture, which in my mind will produce, in the upcoming election, a major swing away from the autocratic control and the violence we are seeing. I’m wondering what thoughts you might have about what can be done between now and the election to address, in a real way, our relationship to immigration. Thank you for your thoughts.

ChatGPT’s final thoughts

That is a thoughtful and timely reflection. The United States is indeed at a pivotal moment in how it defines its relationship to immigration—morally, politically, and economically. The “deportation-first” posture that has dominated recent discourse has produced not only humanitarian crises but also deep divisions in national identity and policy coherence. As public sentiment shifts and the next election approaches, there is an opening for a more constructive, humane, and sustainable approach to immigration.

Understanding the Current Landscape

The current U.S. immigration environment is characterized by several overlapping dynamics:

– **Escalation of Enforcement:** Expanded detention, accelerated removals, and state-level cooperation with federal deportation programs have intensified since 2017, with continued emphasis on deterrence rather than integration.

– **Humanitarian Strain:** The asylum system remains backlogged, with over two million pending cases as of 2024. Families and unaccompanied minors face prolonged uncertainty and inadequate legal representation.

– **Economic Contradictions:** The U.S. labor market depends heavily on immigrant labor, especially in agriculture, construction, and healthcare, yet policy continues to criminalize the very workers who sustain these sectors.

– **Public Opinion Shifts:** Recent polling (Pew Research, 2023) indicates growing support for pathways to citizenship and humane border management, especially among younger and suburban voters, suggesting a potential electoral realignment.

Pathways for Constructive Action Before the Election

1. Shift the Narrative from Security to Contribution

Public discourse can be reframed to highlight immigrants as contributors rather than threats. Research by the Migration Policy Institute and Brookings Institution shows that local storytelling—emphasizing immigrants’ roles in community building, healthcare, and entrepreneurship—significantly reduces polarization.

– **Action:** Support community-based media and civic organizations that amplify immigrant voices and local success stories.

– **Goal:** Rehumanize immigration as a shared social investment rather than a partisan wedge issue.

2. Expand Legal Pathways and Reduce Backlogs

Administrative measures can be taken even without new legislation. The executive branch has authority to streamline visa processing, expand Temporary Protected Status (TPS), and prioritize family reunification.

– **Action:** Encourage advocacy for executive reforms such as expanding humanitarian parole and modernizing work visa categories.

– **Goal:** Replace the reactive enforcement model with a proactive, orderly system that reflects economic and humanitarian realities.

3. Strengthen Local Integration Policies

Cities and states have become laboratories for inclusive immigration policy. Sanctuary jurisdictions, municipal ID programs, and local legal aid initiatives have proven effective in improving safety and economic participation.

– **Action:** Support local coalitions that provide legal defense funds, language access, and workforce integration programs.

– **Goal:** Build resilience at the community level while federal policy remains gridlocked.

4. Address Root Causes in Foreign Policy

Migration is inseparable from global inequality, climate change, and political instability. U.S. policy toward Latin America and the Caribbean, for example, has historically oscillated between neglect and intervention.

– **Action:** Advocate for development partnerships that target climate adaptation, anti-corruption, and rural livelihoods in sending countries.

– **Goal:** Reduce forced migration by addressing structural causes rather than militarizing borders.

5. Promote Civic and Electoral Engagement

Immigrant communities and their allies can influence policy through participation. Naturalization rates have risen sharply since 2020, and newly naturalized citizens represent a significant potential voting bloc.

– **Action:** Support voter registration drives, civic education, and coalition-building across racial and generational lines.

– **Goal:** Translate demographic change into democratic representation.

The Broader Ethical and Political Frame

The immigration debate is not only about policy—it is a moral referendum on what kind of society the United States aspires to be. Hannah Arendt’s insight that “the right to have rights” begins with belonging remains relevant: when a nation defines people as deportable, it erodes its own democratic foundations.

https://en.wikipedia.org/wiki/Hannah_Arendt

The upcoming election may therefore hinge not just on border control but on the broader question of whether the U.S. will continue to treat migration as a crisis or as a permanent, generative feature of its identity. A shift away from punitive approaches toward pragmatic humanism could signal a reassertion of democratic values over fear-based governance.

Summary

Between now and the election, meaningful progress on immigration can occur through narrative change, administrative reform, local action, and civic mobilization. The goal is not only to manage migration but to redefine it as a shared human and national project.

If the political tide does turn, it will likely reflect a collective recognition that coercion and exclusion have failed—and that the future of American democracy depends on embracing immigration as both a moral responsibility and a source of renewal.

Understanding STEM

Leave a comment

What follows is an interchange of information between me and ChatGPT-4.  There are a number of reasons that I publish this.  First is the subject matter: everyone needs more knowledge, skills, and maturity, hence our culture’s commitment to educating the next generation (as well as the older ones).  Secondly, to demonstrate the conversational (chat) type of exchange used in acquiring data and requesting first drafts.  Thirdly, to assist teachers in drawing a line when the culture won’t.  The culture is stuck in the era when copyrights were given to printers, not to a knowledgeable educator.  The law is ‘out of date’ and needs to be retired.  Fourthly…

I initiate the exchange with …

Pretend you are a professor of computer science and a student comes to you with an interest in the STEM educational movement in the United States, and is curious about the history within the United States and in other countries of the world. How would you direct their inquiry?

ChatGPT response …

If a student came to me with a serious interest in the STEM education movement—its origins, evolution, and global variations—I would treat it not simply as an educational trend, but as a policy response to economic competition, technological change, and national security concerns. STEM education is best understood as a recurring strategy nations use when they perceive themselves to be falling behind in science, industry, or innovation.

I would structure the inquiry historically, comparatively, and analytically.

Step 1: Clarify What “STEM” Means

https://en.wikipedia.org/wiki/Science,_technology,_engineering,_and_mathematics

Before beginning archival or policy research, I would ask the student to interrogate the term itself.

– When did the acronym “STEM” emerge?

– What earlier labels existed (e.g., “science education,” “technical education,” “industrial arts”, “math”, “engineering”)?

– What disciplines are included or excluded?

– Is STEM primarily workforce policy, national security policy, equity policy, or pedagogical reform?

The acronym gained prominence in the early 2000s, but the movement’s intellectual roots are much older.

Step 2: Build a U.S. Historical Framework

I would guide the student through key inflection points.

Early Industrial and Progressive Era (Late 19th–Early 20th Century)

– Expansion of engineering schools through land-grant universities.

– Morrill Acts establishing practical and agricultural education.

– Rise of industrial arts in K–12 systems.

Key research question:

Was technical education originally about democratization of knowledge or industrial workforce preparation?

World War II and Federal Science Mobilization

– Massive federal investment in research.

– Institutionalization of federal–university partnerships.

– Expansion of engineering and physics education.

This period established the research university model that still underpins STEM.

The Sputnik Shock (1957)

This is a central pivot.

– National Defense Education Act.

– Major investment in math and science curricula.

– Framing STEM as national security imperative.

I would ask the student to examine how geopolitical anxiety reshaped curriculum standards.

Cold War to Post-Cold War

– Expansion of Advanced Placement science and math.

– Growth of computer science education.

– Increasing attention to global competitiveness.

By the 1980s, economic competition with Japan shifted rhetoric from military rivalry to industrial productivity.

The 21st Century STEM Consolidation

– Formalization of the STEM acronym.

– Emphasis on innovation economy.

– Coding initiatives in K–12.

– Diversity and inclusion framing.

– Public–private partnerships in curriculum design.

The student should analyze how STEM evolved from defense-driven urgency to innovation-driven economic strategy.

Step 3: Comparative International Context

A serious inquiry must go beyond the United States.

Soviet Union

– Centralized science education.

– Strong emphasis on mathematics and physics.

– Elite technical institutes feeding state planning apparatus.

Key question:

How did centralized planning shape STEM pipelines differently from decentralized U.S. systems?

Germany

– Dual education system combining apprenticeships and formal schooling.

– Strong vocational-technical track.

– Industrial alignment without abandoning liberal education.

Contrast:

Germany integrates technical training into secondary education more systematically than the U.S.

East Asian Models (Japan, South Korea, Singapore, China)

– High national investment in math and science performance.

– Rigorous standardized systems.

– Heavy state direction in curriculum priorities.

– Strong societal emphasis on technical achievement.

China in particular has combined rapid university expansion with large-scale state research funding.

Research direction:

Compare centralized national strategy with U.S. federal–state fragmentation.

Finland

– Strong teacher preparation.

– Less standardized testing pressure.

– High performance without heavy STEM branding.

This challenges the assumption that aggressive STEM rhetoric is required for excellence.

Step 4: Identify Core Themes Across Nations

I would encourage organizing comparative work around several dimensions:

A. State Centralization vs. Local Control

– U.S.: Federal funding, local curriculum control.

– France, China: Centralized curriculum mandates.

– Germany: Federal structure with vocational standardization.

B. Vocational vs. Academic Orientation

– U.S.: Historically academic, weaker vocational integration.

– Germany: Strong vocational parity.

– East Asia: Academic competition feeding technical universities.

C. Equity and Access

– Gender disparities in engineering.

– Racial and socioeconomic disparities in the U.S.

– State-driven gender equity campaigns in some Nordic countries.

D. Innovation vs. Standardization

– Does high standardized performance correlate with innovation?

– How do entrepreneurial ecosystems interact with STEM pipelines?

Step 5: Policy Drivers Behind STEM

The student should identify recurring drivers:

– Military competition.

– Economic rivalry.

– Technological disruption.

– Labor shortages in high-skill sectors.

– Political narratives about decline.

STEM initiatives often surge during perceived national vulnerability.

Step 6: Examine the Role of Computer Science

As a computer science professor, I would push deeper into computing education.

– When did computer science become distinct from mathematics?

– How did coding enter K–12 curricula?

– What is the relationship between computational thinking and broader STEM identity?

– Are we witnessing a shift from STEM to STEAM (adding arts)?

Computing often functions as the bridge between traditional science education and digital economy demands.

Step 7: Critically Evaluate the Movement

A rigorous inquiry must include critique.

Questions to pursue:

– Has STEM education measurably improved innovation outcomes?

– Are workforce shortages real or cyclical?

– Does overemphasis on STEM crowd out humanities?

– Is STEM rhetoric sometimes political branding rather than structural reform?

Historically, educational movements can become symbolic without structural follow-through.

Step 8: Methodological Approach

I would recommend:

– Policy document analysis.

– Legislative history review.

– International comparative data (PISA, TIMSS).

– Enrollment and graduation statistics.

– Interviews with educators or policymakers.

– Archival review of curriculum reforms.

The student should avoid anecdotal narratives and rely on institutional evidence.

Step 9: Potential Thesis Directions

Strong research directions might include:

– STEM as Cold War legacy policy architecture.

– Comparative analysis of U.S. and German technical education models.

– The evolution of computer science in K–12 curricula.

– STEM and economic competitiveness rhetoric from Sputnik to AI.

– Gender and diversity policy in STEM across nations.

Each topic connects historical context with contemporary relevance.

Step 10: Contemporary Inflection Point: AI and Automation

Finally, I would situate the inquiry in the present.

Artificial intelligence, automation, and digital transformation are reshaping:

– What counts as foundational technical literacy.

– Which STEM fields are prioritized.

– How early coding and data science are introduced.

– Whether computational thinking becomes universal.

The student should ask:

Is the current AI-driven expansion of STEM comparable to the Sputnik moment? Or is it structurally different because of globalization and private-sector dominance?

Concluding Guidance

I would advise the student to treat STEM not as a static educational category, but as a strategic policy instrument repeatedly reshaped by geopolitical, economic, and technological pressures. The U.S. case reveals cycles of urgency tied to competition, while international comparisons show alternative institutional designs that achieve similar or superior outcomes through different governance models.

A strong inquiry will balance historical narrative, policy analysis, and comparative international study—while maintaining a critical lens toward both the promises and limitations of the STEM educational movement.

My response …

STEM activities traditionally involve the sharing of information over Internet communication sources.  How do you see this affecting copyrights and intellectual property ownership in general?

ChatGPT response …

The expansion of STEM activity through internet-based collaboration has fundamentally altered how intellectual property is created, shared, protected, and contested. What once occurred within laboratories, classrooms, or corporate R&D divisions now unfolds across cloud platforms, open repositories, global forums, and AI-assisted environments. This shift challenges traditional copyright and intellectual property (IP) frameworks that were designed for geographically bounded, individually authored works.

Below is how I would frame the issue analytically.

1. The Shift from Scarcity to Replicability

Traditional copyright law developed in a context where copying required physical reproduction. Digital STEM work—code, datasets, simulations, designs—can be copied and distributed at near-zero marginal cost.

Consequences:

– Enforcement becomes more difficult.

– Attribution becomes more complex.

– Ownership boundaries blur.

– Unauthorized redistribution becomes structurally easier.

The internet does not eliminate copyright; it stresses its enforceability model.

2. STEM Culture and the Rise of Open Licensing

STEM disciplines, particularly computer science, have cultivated open-sharing norms:

– Open-source software (GPL, MIT, Apache licenses).

– Open-access research publishing.

– Creative Commons educational materials.

– Public code repositories (e.g., Git-based collaboration).

Open licensing does not eliminate ownership; it redefines it contractually.

3. Collaborative Authorship and Attribution Complexity

STEM projects increasingly involve:

– Multi-author research papers.

– Distributed code contributions.

– Crowdsourced problem-solving.

– International research teams.

This complicates:

– Determining who holds copyright.

– Allocating patent inventorship.

– Enforcing derivative work restrictions.

– Managing cross-border jurisdiction issues.

Traditional IP law presumes identifiable creators; distributed collaboration diffuses that clarity.

4. Educational Context and Copyright Tensions

STEM education increasingly relies on:

– Online lectures.

– Shared slides and recorded demonstrations.

– Student code submissions in shared repositories.

– Cloud-based learning platforms.

Key tensions include:

– Who owns student-created code?

– Can instructors reuse student work?

– Do platform providers claim rights through terms of service?

– How do fair use doctrines apply in digital classrooms?

Universities often default to institutional policies that distinguish between student and faculty IP ownership, but cloud-based tools complicate this.

5. AI as an Amplifier of IP Complexity

Artificial intelligence introduces new stress points:

– AI models trained on copyrighted materials.

– AI-generated code or designs.

– Derivative works created algorithmically.

– Difficulty tracing original sources in training data.

Core questions emerging:

– Is AI-generated output copyrightable?

– Who owns AI-assisted inventions?

– Does training on public data constitute infringement?

– How do open-source licenses apply to AI-generated derivatives?

AI compresses creation cycles and blurs authorship.

6. Patent Law and STEM Innovation

Beyond copyright, STEM activity frequently intersects with patent law:

– University technology transfer offices.

– Startup formation from academic labs.

– Government-funded research and Bayh–Dole implications.

– International patent filing strategies.

The internet accelerates idea diffusion, which can:

– Undermine novelty requirements.

– Trigger prior art disclosures.

– Increase defensive patenting.

Speed of publication can inadvertently affect patentability.

7. Globalization and Jurisdictional Fragmentation

STEM collaboration is inherently transnational.

Complications include:

– Differing copyright durations.

– Divergent moral rights doctrines.

– Varied enforcement mechanisms.

– Inconsistent treatment of software patents.

A project hosted in one country may include contributors from many legal regimes, raising enforcement and ownership ambiguities.

8. Economic Incentives and Innovation Models

The core philosophical tension remains:

– IP as incentive for innovation.

– Open sharing as accelerator of innovation.

Historically, strong IP rights were justified to encourage investment.

In STEM fields like software, innovation often occurs rapidly through open ecosystems rather than proprietary isolation.

Hybrid models now dominate:

– Companies open-source core libraries while monetizing services.

– Universities patent selectively while publishing broadly.

– Firms release APIs while protecting proprietary algorithms.

9. Cultural Impact on Ownership Norms

STEM internet culture emphasizes:

– Attribution over exclusion.

– Contribution metrics.

– Reputation systems.

– Community governance.

This shifts the social meaning of ownership from exclusivity toward recognition and collaborative credit.

However, commercial integration reintroduces exclusivity pressures, especially in AI, biotech, and semiconductor industries.

10. Likely Future Developments

Based on structural trends, probable directions include:

– Expansion of standardized open licensing frameworks.

– Stronger algorithmic transparency requirements.

– Legal clarification around AI-generated works.

– Platform-based IP governance systems.

– Increased institutional training on IP literacy in STEM curricula.

Educational systems may increasingly teach not only coding and engineering, but also licensing strategy and IP compliance.

Structural Insight

Internet-based STEM activity does not eliminate intellectual property; it transforms it from a static ownership claim into a negotiated governance mechanism within distributed networks.

Where 20th-century IP centered on controlling duplication, 21st-century IP increasingly concerns:

– Managing collaboration.

– Structuring incentives.

– Balancing openness and commercialization.

– Defining accountability in AI-mediated creation.

Concluding Perspective

The integration of internet communication into STEM activity has shifted intellectual property from a primarily defensive legal instrument to a strategic architectural component of innovation ecosystems. Copyright and IP ownership now operate within fluid, global, AI-assisted networks where attribution, licensing design, and institutional policy matter as much as formal legal rights.

The long-term equilibrium will likely not be a retreat from IP protection, but a continued evolution toward hybrid regimes that combine open collaboration with structured ownership frameworks aligned to digital and AI-driven realities.

My response …

I know of some STEM teachers who are reluctant to send their students to sharing websites where questions of ownership and copyright infringement exist. How would you advise these educators to keep their students involved in the movement while respecting ownership rights?

ChatGPT response …

Your concern reflects a legitimate tension: STEM education thrives on collaboration and sharing, yet educators carry responsibility for protecting students’ intellectual property and avoiding infringement. The solution is not withdrawal from digital platforms, but structured participation with clear governance, literacy, and safeguards.

Below is how I would advise educators to balance engagement and protection.

1. Reframe the Issue as Internet Literacy, Not Platform Avoidance

Avoiding sharing platforms does not eliminate risk; it delays students’ exposure to realities they will face in higher education and industry.

Instead:

– Teach “intellectual property as part of STEM competence”.

– Integrate copyright, licensing, and attribution into project rubrics.

– Require students to document sources and licensing choices.

Internet literacy becomes a professional skill rather than a compliance burden.

2. Use Controlled or Institutional Platforms First

Before using fully public repositories, begin with structured environments:

– School-managed learning management systems.

– Private Git repositories under institutional accounts.

– Closed collaboration tools with teacher oversight.

– District-approved cloud environments.

3. Teach Licensing Explicitly

Students should learn that publishing without a license creates ambiguity.

Introduce:

– “Creative Commons licenses” for written or multimedia work.

– The difference between permissive and copyleft licenses.

– Attribution requirements.

Have students include a LICENSE file in code projects and practice proper attribution formatting.
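The license-and-attribution practice described above can be made concrete with a small helper. This is a minimal sketch; the author names, years, and source repository shown are hypothetical placeholders, and a real class would adapt the wording to the license its students actually choose.

```python
# Sketch of standardized license headers and attribution lines for
# student source files. All names, years, and sources are hypothetical.

def license_header(author: str, year: int, license_name: str = "MIT") -> str:
    """Return a comment block to paste at the top of a new source file."""
    return (
        f"# Copyright (c) {year} {author}\n"
        f"# Licensed under the {license_name} License.\n"
        f"# See the LICENSE file in the repository root for full terms.\n"
    )

def attribution_line(item: str, source: str, license_name: str) -> str:
    """Return a one-line credit for code reused from an external source."""
    return f"# {item} adapted from {source} ({license_name} license)"
```

For example, `license_header("A. Student", 2024)` produces a three-line header a student can paste verbatim, and `attribution_line("sort routine", "github.com/example/repo", "MIT")` gives a consistent citation format for borrowed code.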

4. Separate Learning Projects from Commercially Sensitive Work

Clarity reduces accidental disclosure.

5. Teach “Public by Design” Project Development

When students know a project may be shared:

– Use only properly licensed datasets.

– Avoid copyrighted images without permission.

– Generate original graphics or use open media.

– Document all external dependencies.

Designing for openness prevents infringement later.

6. Address Student Ownership Transparently

Teachers should clarify:

– Who owns student-created work under district policy.

– Whether platforms claim rights through terms of service.

– Whether parental consent is required for minors.

– How students can remove or revise work later.

Transparency builds trust and informed participation.

7. Model Attribution and Ethical Use

Teachers can:

– Demonstrate proper citation of code libraries.

– Show how to credit GitHub contributors.

– Explain why copying code without attribution is problematic.

– Discuss real-world IP disputes in technology.

Modeling behavior normalizes professional standards.

8. Introduce Incremental Public Engagement

Students do not need to publish globally on day one.

A gradual approach:

– Internal classroom sharing.

– School-level showcase.

– District showcase.

– Optional public repository for advanced students.

Choice matters. Not every student must publish publicly.

9. Establish Clear Classroom Policies

Create a simple written framework:

– No uploading copyrighted material without permission.

– All external code must be cited.

– Students choose a license before publishing.

– Teacher review required before public release.

Consistency reduces uncertainty.

10. Emphasize Long-Term Professional Preparation

Industry norms expect:

– Familiarity with open-source ecosystems.

– Comfort with public code review.

– Understanding of contribution guidelines.

– Awareness of IP compliance.

Shielding students entirely from public collaboration may leave them unprepared for modern technical environments.

11. Address Fear Through Structure

Reluctance often stems from fear of:

– Student work being stolen.

– Students inadvertently infringing.

– Legal liability.

– Platform misuse.

Mitigation strategies include:

– Using pseudonymous accounts when appropriate.

– Reviewing platform terms of service.

– Providing digital citizenship training.

– Coordinating with district legal advisors when needed.

Fear decreases when procedures are clear.

12. Treat IP as a Teachable Ethical Framework

Rather than framing copyright as restriction, present it as:

– A system that balances innovation and protection.

– A design choice in how creators share their work.

– A foundational element of professional integrity.

Students who understand ownership structures become more confident collaborators, not more cautious participants.

In summary, educators do not need to withdraw from sharing platforms to protect intellectual property. By embedding licensing education, establishing clear policies, using controlled environments initially, and gradually introducing public participation, teachers can keep students fully engaged in STEM while respecting ownership rights and professional standards.

My response …

Please give advice on how to handle ‘file sharing’ sites such as ‘thingiverse.com’ with respect to copyright infringement. Also, how should we address the prevalence of viruses and other malicious attachments that might get into a student’s computer?

Thank you.

ChatGPT response …

File-sharing platforms such as ‘Thingiverse’ and similar maker repositories can be valuable for STEM learning, but they introduce two parallel risks: “copyright infringement” and “malware exposure”. Schools should approach these platforms through structured governance rather than informal use. The goal is controlled participation that protects students, networks, and intellectual property while preserving educational value.

Below is a practical framework an IT department can implement.

1. Establish a Formal Platform Approval Process

Before allowing access:

– Conduct a “risk assessment” of the site’s terms of service.

– Review its copyright complaint procedures.

– Evaluate moderation policies and content filtering.

– Assess historical security incidents or breach history.

– Confirm whether accounts are required and how data is stored.

Approved platforms should be documented in district policy.

2. Implement Network-Level Safeguards

IT should reduce technical risk before students access any file-sharing repository.

Recommended measures:

– Deploy “web filtering and DNS filtering” to restrict unsafe domains.

– Use “endpoint protection software” with real-time scanning.

– Enable “sandboxing” for downloaded files.

– Disable executable file types unless required for instruction.

– Apply strict download policies on student devices.

– Separate student VLANs from critical administrative networks.

For 3D printing environments specifically, ensure firmware updates are managed centrally.

3. Control File Types and Execution

Most STEM file-sharing sites distribute:

– STL files (3D models)

– G-code files

– Source code

– Images

– Design files

Risks increase with executable or script files.

IT policies should:

– Restrict .exe, .bat, .ps1, .jar files unless required.

– Automatically scan compressed files (.zip, .rar).

– Block macro-enabled documents unless vetted.

– Require teacher review before execution of downloaded scripts.

Even non-executable files can contain malicious payloads if improperly handled.
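The file-type policy above can be enforced in software. The sketch below shows one plausible shape for such a check; the specific allowlist and blocklist entries are illustrative choices, not an official district policy, and a real deployment would hook this into the download or endpoint-protection layer.

```python
from pathlib import Path

# Illustrative policy lists; a district would tune these to its curriculum.
ALLOWED = {".stl", ".gcode", ".svg", ".png", ".jpg", ".txt", ".md", ".py"}
BLOCKED = {".exe", ".bat", ".ps1", ".jar", ".vbs", ".scr"}

def classify_download(filename: str) -> str:
    """Return 'allow', 'block', or 'review' for a downloaded file name."""
    ext = Path(filename).suffix.lower()
    if ext in BLOCKED:
        return "block"
    if ext in ALLOWED:
        return "allow"
    return "review"  # unknown extensions go to teacher/IT review
```

Under these lists, `classify_download("bracket.STL")` returns `"allow"`, `classify_download("tool.exe")` returns `"block"`, and anything unrecognized falls through to `"review"` rather than being silently permitted.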

4. Use Sandboxed or Isolated Environments

When students need to test code or open unfamiliar files:

– Use virtual machines.

– Deploy containerized environments.

– Provide school-managed cloud IDEs.

– Create restricted lab computers for experimental downloads.

Isolation limits spread if malware is introduced.

5. Develop Clear Copyright and Usage Guidelines

Technical safety alone is insufficient. IT and instructional leadership should publish clear guidance:

– Students may download only files with clearly stated licenses.

– Students must document source URLs and license types.

– Redistribution of downloaded content must follow license terms.

– No uploading copyrighted commercial designs.

– Teacher approval required before uploading original work.

Encourage use of Creative Commons and open-source licensed content.

6. Teach Students to Evaluate Licenses

Provide short training modules explaining:

– Public domain vs. Creative Commons.

– Attribution requirements.

– Non-commercial restrictions.

– Share-alike provisions.

– Open-source code licenses.

Students should practice identifying license statements before downloading.

7. Monitor and Log Activity Responsibly

IT should:

– Maintain activity logs for compliance and security review.

– Monitor unusual download patterns.

– Set alert thresholds for suspicious activity.

– Conduct periodic audits of high-risk file types.

Monitoring should be transparent and aligned with student privacy policies.
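The "unusual download patterns" item above can be approximated with a simple threshold report. This is a sketch under assumed inputs: it presumes the log can be reduced to (user, filename) pairs, and the threshold value is arbitrary; real monitoring would use the district's actual log schema and alerting tools.

```python
from collections import Counter

def flag_heavy_downloaders(events, threshold=50):
    """events: iterable of (user_id, filename) pairs from a download log.

    Returns the sorted user_ids whose download count exceeds the
    threshold, for follow-up review rather than automatic punishment.
    """
    counts = Counter(user for user, _ in events)
    return sorted(user for user, n in counts.items() if n > threshold)
```

A report like this should feed a human review step, consistent with the transparency and privacy caveat above.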

8. Require Teacher Mediation for Younger Students

For elementary and middle school environments:

– Block direct student accounts where possible.

– Allow teachers to curate and distribute approved files.

– Maintain a shared district repository of pre-vetted resources.

This reduces exposure to both inappropriate content and malicious files.

9. Maintain an Approved Resource Repository

Instead of relying entirely on external platforms:

– Create a district-managed repository of vetted designs.

– Store approved STL files, datasets, and code examples.

– Maintain version control and license documentation.

– Remove files that violate policy.

This hybrid model preserves creativity while limiting risk.

10. Update Incident Response Protocols

Ensure procedures exist if:

– Malware is detected on a student device.

– A copyright complaint is received.

– Unauthorized uploads occur.

– Infringing material is distributed.

Response plans should include:

– Immediate isolation of affected devices.

– Forensic scanning.

– Communication with administration.

– Documentation of actions taken.

Preparedness reduces disruption.

11. Address 3D Printing–Specific Risks

With sites like Thingiverse:

– Verify STL files for geometry errors before slicing.

– Scan slicing software updates centrally.

– Prevent use of unknown custom firmware.

– Ensure printers are not directly exposed to the open internet.

Compromised printer firmware can present both safety and network risks.
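One cheap automated check behind the "verify STL files" item: binary STL files declare their triangle count, so a size mismatch flags truncated or tampered downloads. This is a minimal integrity sketch, not a full geometry validator, and it deliberately ignores ASCII STL files (plain text beginning with `solid`), which use a different layout.

```python
import struct

def binary_stl_size_ok(path: str) -> bool:
    """Check a binary STL file's size against its declared triangle count.

    Binary STL layout: 80-byte header, little-endian uint32 triangle
    count, then 50 bytes per triangle. A mismatch suggests truncation
    or tampering and the file should not be sliced.
    """
    with open(path, "rb") as f:
        data = f.read()
    if len(data) < 84:
        return False
    (count,) = struct.unpack_from("<I", data, 80)
    return len(data) == 84 + 50 * count
```

Files failing this check can be quarantined for teacher or IT review before they ever reach slicing software.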

12. Coordinate with Legal and Curriculum Leadership

IT should not operate in isolation.

Coordinate with:

– District legal counsel on copyright compliance.

– Curriculum directors on instructional alignment.

– Teachers on acceptable use policies.

– Parents on digital safety expectations.

Shared governance prevents policy gaps.

13. Provide Ongoing Staff Training

Teachers need training on:

– Recognizing suspicious files.

– Understanding platform licensing structures.

– Proper attribution practices.

– Reporting security concerns promptly.

Regular updates prevent outdated assumptions.

Balanced Strategic Approach

File-sharing platforms are integral to modern STEM ecosystems. Blocking them entirely can limit authentic learning. Allowing unrestricted access creates legal and security vulnerabilities. The appropriate approach is structured access supported by technical safeguards, license education, teacher oversight, and documented policy.

When IT departments combine cybersecurity controls with intellectual property literacy, students can engage confidently with collaborative maker communities while minimizing both copyright infringement and malware exposure.

Work in AI era


I initiated the conversation with …

Pretend you are a history professor specializing in labor relations. A student is interested in the history of self-managed, work-related activities. How would you direct that student in their research?

ChatGPT responded with …

If a student approached me with an interest in the history of “self-managed work-related activities”, I would frame the topic as a serious and intellectually rich field within labor history, industrial relations, and political economy. The concept spans worker control, cooperative enterprise, autonomous shop-floor governance, and participatory management. I would guide the student toward building both conceptual clarity and a structured historical arc.

Below is how I would direct the research.

Step 1: Define the Core Concept

Before diving into archives, I would require the student to clarify terminology. “Self-managed work” can refer to several distinct but related traditions:

– “Worker self-management” (workers collectively control enterprise decisions)

– “Producer cooperatives” (worker-owned and democratically governed firms)

– “Shop-floor autonomy” (informal worker control over pace and methods)

– “Participatory management” (management-sanctioned worker input systems)

– “Syndicalism and council communism”

– “Employee stock ownership models (ESOPs)”

The student should develop a working definition early and refine it as research progresses.

Step 2: Establish a Chronological Framework

I would recommend organizing the research historically rather than ideologically.

Early Industrial Period (19th Century)

– Craft guild traditions and early producer cooperatives.

– The Rochdale cooperative movement in Britain.

– Utopian socialist experiments (e.g., Owenite communities).

– Early labor republicanism in the United States.

Key question: Was self-management a survival strategy, a moral philosophy, or a political alternative to capitalism?

Revolutionary and Radical Movements (Late 19th–Early 20th Century)

– French and Spanish syndicalism.

– Russian factory committees (1917–1918).

– German workers’ councils after World War I.

– Italian factory occupations (1919–1920).

Key question: Under what political conditions does worker self-management emerge at scale?

Interwar and Post–World War II Experiments

– Yugoslav system of worker self-management.

– Spanish Civil War collectivizations.

– Israeli kibbutzim.

– Cooperative movements in Scandinavia and Italy (Emilia-Romagna).

Key question: Can self-management function within a state socialist framework or mixed economy?

Late 20th Century Workplace Reform Movements

– U.S. quality circles and participatory management (1970s–1980s).

– Mondragon Cooperative Corporation in Spain.

– Employee ownership and ESOP expansion.

– Shop-floor resistance and informal autonomy in industrial plants.

Key question: Is modern participatory management genuine self-management or managerial adaptation?

Contemporary Developments

– Worker-recuperated enterprises (e.g., Argentina after 2001).

– Platform cooperativism.

– Remote work autonomy models.

– Horizontal organizational structures in tech and creative sectors.

Key question: Does digital-era decentralization represent a new phase or a rebranding of older cooperative traditions?

Step 3: Thematic Research Axes

Beyond chronology, I would encourage thematic structuring.

A. Power and Control

– Who controls investment decisions?

– Who controls labor processes?

– How are conflicts resolved internally?

Encourage use of labor process theory, especially the work of Harry Braverman.

B. Property and Ownership

– Legal structures of cooperatives.

– Differences between collective ownership and state ownership.

– Tension between capital accumulation and democratic governance.

C. Efficiency and Productivity

– Comparative economic performance.

– Long-term sustainability of cooperatives.

– Crisis resilience.

D. Democracy and Participation

– Voting structures.

– Representation models.

– Gender and racial inclusion in self-managed workplaces.

E. State Interaction

– Legal recognition.

– Regulatory barriers.

– Political suppression or support.

Step 4: Methodological Guidance

I would advise the student to combine:

– Archival research (union archives, cooperative records, state labor reports).

– Comparative case studies.

– Oral histories where possible.

– Economic performance analysis.

– Legal history review.

If quantitative analysis is included, encourage comparison of survival rates between conventional firms and cooperatives.
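The survival-rate comparison suggested above reduces to simple arithmetic once the data are assembled. The figures below are invented purely for illustration; a real analysis would draw on business registry data for founding and closure dates of matched firm samples.

```python
# Toy illustration of a cooperative vs. conventional survival comparison.
# All numbers are hypothetical, for demonstration only.

def survival_rate(still_operating: int, founded: int) -> float:
    """Fraction of founded firms still operating at the observation year."""
    return still_operating / founded

# Hypothetical five-year outcomes for two matched 100-firm samples:
conventional = survival_rate(48, 100)
cooperative = survival_rate(61, 100)
difference = cooperative - conventional  # positive favors cooperatives
```

Even this toy framing forces the key methodological questions: how samples are matched, over what horizon survival is measured, and whether closures and conversions are counted the same way.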

Step 5: Key Case Studies to Anchor the Project

Rather than attempting global coverage, I would suggest selecting two or three anchor cases for deep analysis:

– Mondragon (Spain)

– Yugoslav worker self-management

– U.S. plywood cooperatives in the Pacific Northwest

– Argentine worker-recuperated factories

– Emilia-Romagna cooperative networks

Depth produces stronger scholarship than broad generalization.

Step 6: Core Scholarly Debates

The student should engage major historiographical questions:

– Is worker self-management a transitional form or stable alternative?

– Does market competition erode democratic governance over time?

– Are successful cooperatives dependent on cultural context?

– Is self-management compatible with large-scale industry?

Encourage engagement with both sympathetic and critical scholarship.

Step 7: Research Questions I Would Push the Student Toward

– Under what economic crises does self-management emerge?

– What institutional structures predict longevity?

– Does worker control reduce labor conflict?

– How does internal democracy affect wage differentials?

– Does self-management transform class relations or adapt to them?

Strong research emerges from clear, testable, historically grounded questions.

Step 8: Practical Research Roadmap

I would suggest:

– Begin with a historiographical review essay.

– Narrow to a defined time period or geographic region.

– Identify primary source archives early.

– Construct a comparative analytical framework.

– Draft a literature matrix mapping arguments, evidence, and conclusions.

Intellectual Orientation

I would caution against romanticization. Self-management has:

– Produced durable enterprises.

– Failed due to undercapitalization.

– Collapsed under political repression.

– Been co-opted into managerial reform systems.

A balanced approach strengthens credibility.

Final Direction to the Student

I would advise the student to treat self-managed work not as a utopian curiosity, but as a recurring historical response to tensions between labor and capital. It emerges most often during periods of economic crisis, political upheaval, or institutional breakdown. The central scholarly task is to determine whether these experiments represent episodic disruptions or viable alternative governance systems within modern economies.

With careful comparative analysis, archival depth, and theoretical grounding, the topic can support undergraduate theses, graduate dissertations, or publishable research in labor history and industrial relations.

I continued the pondering with …

Considering the current status of labor relations in the United States and the Western world in general, how would you couple the historic attempts at managing a relationship that benefits everyone with the current situation?

ChatGPT responded with …

If we step back historically, labor–management relations have always oscillated between conflict and negotiated accommodation. Today’s landscape in the United States and much of the Western world—declining union density alongside renewed organizing energy, fissured workplaces, platform labor, employee ownership experiments, and ESG-era governance debates—can be read as another phase in a long cycle of institutional adaptation.

To couple historic attempts at building mutually beneficial work relationships with the current situation, I would structure the analysis around institutional patterns, not nostalgia.

The Long Arc: From Conflict to Compromise to Fragmentation

Historically, three major regimes shaped labor relations in industrial democracies:

– “Confrontational industrial capitalism (19th–early 20th century)”

Characterized by strikes, repression, craft traditions, and early cooperatives.

– “Mid-20th century social compact”

Collective bargaining, welfare-state expansion, internal labor markets, productivity-linked wage growth.

– “Late 20th century neoliberal restructuring”

Deregulation, globalization, financialization, subcontracting, decline in union density.

The current period shows elements of reconfiguration rather than simple continuation.

Parallels Between Past Experiments and Present Conditions

Several historical models are reappearing in updated forms.

1. Worker Self-Management and Employee Ownership

Historic precedents:

– Producer cooperatives.

– Yugoslav worker self-management.

– Mondragon federation.

– U.S. plywood cooperatives.

Contemporary echoes:

– ESOP growth in the U.S.

– Worker-recuperated firms.

– Platform cooperatives.

– Steward-ownership models.

Key linkage:

Both periods emerge amid dissatisfaction with traditional capital–labor hierarchies and concerns about inequality.

The difference:

Modern efforts operate within globalized capital markets and regulatory complexity that constrain autonomy.

2. Collective Bargaining and Sectoral Approaches

Historic mid-century model:

– Industry-wide bargaining.

– Strong union density.

– Wage compression.

– Predictable grievance procedures.

Current developments:

– Renewed union organizing in logistics, education, healthcare, and technology.

– Experiments with sectoral bargaining in parts of Europe.

– Legislative debates over gig worker classification.

Coupling insight:

The mid-century compromise relied on institutional density. Today’s fragmented workforce makes firm-level bargaining less stable, reviving interest in broader sectoral frameworks.

3. Participatory Management and Shop-Floor Autonomy

Historical roots:

– Works councils in Europe.

– Quality circles in the 1970s–1980s.

– Shop-floor negotiated autonomy in unionized plants.

Current manifestations:

– Agile teams.

– Self-directed work groups.

– Hybrid remote work structures.

– Project-based autonomy in knowledge industries.

Historical lesson:

Participation without power-sharing tends to revert to managerial control. Durable participation requires institutional backing—either legal (codetermination) or ownership-based.

4. Crisis as Catalyst

Historically, major shifts in labor relations followed crises:

– The Great Depression → Wagner Act and New Deal labor regime.

– Postwar reconstruction → European social partnership.

– 1970s stagflation → deregulation and restructuring.

Contemporary catalysts include:

– Financial crises.

– Pandemic disruptions.

– Supply chain instability.

– Automation and AI displacement pressures.

– Growing income inequality.

History suggests institutional reform becomes feasible when crisis destabilizes existing equilibria.

Despite parallels, current conditions diverge significantly from earlier eras.

These differences complicate direct transplantation of historic models.

What History Suggests About “Beneficial for Everyone”

Historically, stable and mutually beneficial arrangements shared several characteristics:

– Institutionalized bargaining mechanisms.

– Predictable dispute resolution.

– Clear legal frameworks.

– Shared productivity gains.

– Long-term investment horizons.

– Recognition of labor as a stakeholder, not merely a cost.

When those conditions eroded, instability followed.

Contemporary Reform Pathways Informed by History

Drawing from historical experience, plausible directions include:

A. Expanding Ownership Participation

– Broader ESOP incentives.

– Cooperative conversions during business succession.

– Hybrid models blending capital markets and worker governance.

Historical grounding: Durable when paired with managerial competence and capital access.

B. Sectoral or Multi-Employer Bargaining

– Wage boards.

– Industry standards agreements.

– Portable benefits systems.

Historical grounding: Effective in fragmented labor markets.

C. Legal Reinforcement of Worker Voice

– Works councils.

– Codetermination.

– Board-level employee representation.

Historical grounding: Long-standing in parts of Europe with measurable stability.

D. Redefining Employment Classification

– Addressing platform-mediated labor.

– Establishing baseline protections independent of classification.

Historical grounding: Labor law has repeatedly evolved in response to new production systems.

The Core Historical Pattern

Every durable labor settlement has balanced three forces:

– Capital’s need for flexibility and return on investment.

– Labor’s need for security and dignity.

– The state’s role as regulator and stabilizer.

When one of these dominates excessively, systemic correction tends to follow.

Strategic Framing for Research or Policy

To couple historic attempts with current conditions, I would advise focusing on:

– Institutional design rather than ideology.

– Incentive alignment rather than rhetoric.

– Empirical outcomes rather than theoretical purity.

– Scalability within globalized markets.

Concluding Perspective

The present moment resembles earlier transitional periods more than stable equilibrium phases. Historic experiments in self-management, sectoral bargaining, and participatory governance demonstrate that mutually beneficial labor relations are not accidental; they are constructed through institutions that embed shared risk and shared reward.

The contemporary challenge is not inventing entirely new models, but adapting proven governance principles—voice, shared gains, accountability, and legal clarity—to a digitized, globalized, and financially integrated economy. History does not provide a blueprint, but it offers a tested repertoire of institutional tools from which to build the next labor settlement.

I concluded with …

You have not touched upon perhaps the most important change in our culture, which is about to happen: the introduction of artificial intelligence and decision-making based upon strong historical knowledge. Can you expand upon that?

ChatGPT response …

Artificial intelligence represents not simply another technological upgrade, but a structural shift in how decisions are made, how labor is directed, and how authority is exercised inside firms. From a labor-relations perspective, AI combines automation with managerial cognition—altering not only ‘what’ work is done, but ‘who decides’ how it is done. Historically, moments when technology reshaped decision-making authority—not just production tools—produced the most profound institutional consequences.

Below is how I would expand the analysis by situating AI within longer historical patterns.

Historical Precedents: When Technology Reorganized Authority

Major labor realignments followed technological transformations that shifted control over knowledge and coordination.

“Mechanization (19th century)”

– Shifted skill from artisan workers to machine systems.

– Reduced craft autonomy.

– Triggered early unionization and cooperative experiments.

“Scientific management (early 20th century)”

– Centralized knowledge in management.

– Broke tasks into measurable units.

– Separated planning from execution.

“Postwar automation and computing”

– Increased capital intensity.

– Created technical-managerial classes.

– Reinforced bureaucratic decision hierarchies.

Each of these reorganized authority structures. AI extends this trajectory by embedding decision logic into algorithmic systems.

What Makes AI Historically Distinct

AI differs in three ways:

– “Cognitive automation” rather than purely physical automation.

– “Algorithmic management at scale”, especially in platform and logistics sectors.

– “Data-driven prediction embedded in governance”, including hiring, evaluation, scheduling, and termination.

Historically, labor conflict has intensified when workers lose visibility into how decisions are made. AI intensifies opacity.

AI and the Labor Process

In labor process theory terms, AI deepens the separation between conception and execution. Traditionally:

– Workers executed.

– Managers planned.

– Executives allocated capital.

With AI:

– Planning functions are increasingly automated.

– Supervisory discretion narrows.

– Decision authority becomes encoded in software.

The critical shift is that control moves from identifiable human managers to distributed algorithmic systems.

Three Historical Analogies

1. Taylorism Revisited

Scientific management extracted tacit knowledge from skilled workers and codified it into managerial systems.

AI similarly:

– Extracts behavioral data.

– Codifies performance patterns.

– Optimizes workflow in real time.

Difference:

Taylorism required visible supervisors; AI embeds supervision invisibly.

2. Financialization and Shareholder Primacy

Late 20th-century financialization shifted firm decision-making toward short-term metrics.

AI accelerates metric dominance by:

– Quantifying micro-performance.

– Optimizing for measurable outputs.

– Reinforcing productivity surveillance.

Historical lesson:

Metric-driven systems often destabilize trust unless counterbalanced by institutional protections.

3. Platformization as Digital Piecework

Gig platforms resemble 19th-century piecework:

– Pay tied directly to task completion.

– Limited long-term security.

– Algorithmic allocation of work.

The historical piecework system generated high variability and labor unrest when pricing power was unilateral.

Institutional Risks Identified by History

Historically, when technology altered power asymmetries without institutional adjustment, outcomes included:

– Increased inequality.

– Labor fragmentation.

– Union revitalization or alternative organizing forms.

– Regulatory intervention.

AI raises similar structural risks:

– Decision opacity.

– Concentration of informational power.

– Wage polarization.

– Deskilling of middle-skill roles.

Potential Institutional Responses Informed by History

History suggests durable adaptation requires institutional recalibration.

A. Transparency Mechanisms

Comparable to early 20th-century labor standards laws, AI-era governance may require:

– Disclosure of algorithmic evaluation criteria.

– Auditability of automated decision systems.

– Worker access to performance data.

Opacity historically fuels instability.
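A minimal sketch of what auditability could look like in practice (the metric names, criteria, and weights below are hypothetical, chosen only for illustration): a scoring decision that carries its own disclosed criteria and a per-factor breakdown, so the worker can see why a score was produced and contest it.

```python
# Hypothetical evaluation criteria; the point is that they are disclosed,
# not hidden inside an opaque model.
WEIGHTS = {"tasks_completed": 0.5, "peer_rating": 0.3, "punctuality": 0.2}

def audited_score(metrics):
    """Score a worker and return an audit record exposing the criteria
    and each factor's contribution, so the decision can be inspected
    and contested."""
    contributions = {k: WEIGHTS[k] * metrics[k] for k in WEIGHTS}
    return {
        "score": round(sum(contributions.values()), 3),
        "criteria": WEIGHTS,             # disclosed evaluation criteria
        "contributions": contributions,  # per-factor breakdown for review
    }

record = audited_score({"tasks_completed": 0.9, "peer_rating": 0.7, "punctuality": 1.0})
print(record["score"], record["contributions"])
```

The design choice is that the audit record, not just the final number, is what gets stored and shared.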

B. Shared Gains Mechanisms

Productivity surges historically stabilized labor relations when gains were broadly distributed.

If AI increases output per worker:

– Profit-sharing systems.

– Broader employee ownership.

– Wage-indexed productivity formulas.

Absent redistribution, technological shocks have historically widened inequality.

C. Reclassification of Work

As with the industrial transition that created modern labor law, AI-driven gig systems challenge existing classifications.

History indicates:

Labor law evolves when production systems no longer fit inherited categories.

D. Worker Participation in Technological Deployment

In postwar Germany, codetermination helped align automation with workforce stability.

AI-era analogues could include:

– Worker representation in technology adoption committees.

– Negotiated implementation timelines.

– Joint governance of performance metrics.

Where workers are excluded from technological decisions, resistance tends to intensify.

Structural Divergences from Past Transitions

However, AI also introduces unprecedented conditions:

– Global simultaneity of adoption.

– Rapid scalability of algorithmic systems.

– Cross-sector impact including professional and knowledge work.

– Ability to replicate managerial judgment.

Unlike prior industrial shifts, AI directly affects white-collar authority structures, not just manual labor.

Strategic Historical Insight

Historically, stable labor systems emerge when three conditions are satisfied:

– Technological gains increase aggregate productivity.

– Institutional frameworks distribute benefits and risks.

– Legitimacy of decision-making processes is maintained.

AI disrupts all three simultaneously.

Probable Near-Term Developments

Drawing from historical patterns, likely trajectories include:

– Increased organizing in algorithmically managed sectors.

– Litigation over automated decision systems.

– Regulatory experimentation at state and national levels.

– Growth of hybrid ownership models as firms seek legitimacy.

– Internal corporate ethics frameworks for AI governance.

Periods of technological acceleration historically precede institutional codification.

Long-Term Structural Question

The central historical question becomes:

Will AI reinforce a highly centralized, data-controlled model of enterprise governance, or will it catalyze new participatory models where workers share in oversight and gains?

History indicates neither outcome is automatic. Technological architecture shapes possibilities, but institutions determine distribution.

Concluding Historical Assessment

Artificial intelligence is best understood not merely as a productivity tool, but as a restructuring of decision authority within firms. In previous eras, when technological systems shifted power asymmetries without compensatory institutional reform, instability followed. When governance structures evolved—through bargaining systems, codetermination, ownership reform, or regulatory frameworks—technological progress integrated into durable social compacts.

The AI transition represents a comparable inflection point. Its long-term labor impact will depend less on the technology itself and more on how societies embed it within systems of accountability, participation, and shared economic return.

Failure

Failure may be your best option

but … the Culture might use some improvement …

I opened the AI collaboration with …

Pretend you are an educator with a strong background in social science and an interest in testing. You have heard that a rocket launch failure is not a failure, but a database full of important information, and that many inventors have indicated that they learn more from their failures than from their successes. A student comes to you and wants to study the culture’s view of failing as opposed to the science/engineering view of failing, particularly how the culture’s view is reflected in the educational system. How would you direct that student?

…ChatGPT …

When a student asks to study cultural views of failure versus scientific views of failure—especially how those views shape education—I would frame it as an investigation into “how societies interpret error”. In engineering, a failed rocket launch is data. In many classrooms, a failed exam is identity. That contrast is fertile ground for serious research.

Here is how I would direct the student.

Step 1: Clarify the Core Research Question

Encourage the student to sharpen the inquiry into something researchable.

Examples:

– How do scientific communities conceptualize failure compared to mainstream educational systems?

– How do cultural beliefs about failure influence grading practices?

– How do different national education systems respond to student error?

– Is failure framed as data, deficiency, or moral weakness?

A focused version might be:

> “How do cultural narratives about failure shape assessment practices and student motivation in formal education?”

Step 2: Establish Conceptual Frameworks

“Failure in Science” 

In scientific methodology:

[Diagram: the scientific method cycle: observation/question → research → hypothesis → experiment → analyze data → report conclusions]

– Failure is “hypothesis disconfirmation”

– Error is “information”

– Iteration is expected

– Replication corrects mistakes

The scientific method structurally assumes error.

Failure in this model is functional, not moral.
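That built-in iteration can be sketched as a loop in which each disconfirmation feeds the next hypothesis. The numeric "experiment" below is a toy of my own construction; all names are illustrative placeholders, not a real API.

```python
from dataclasses import dataclass

@dataclass
class Result:
    confirmed: bool
    observation: float   # how far the prediction missed

TARGET = 42.0  # the "truth" this toy experiment probes

def run_experiment(hypothesis):
    # Toy experiment: the hypothesis is a numeric prediction, and the
    # observation reports the direction and size of the miss.
    return Result(abs(hypothesis - TARGET) < 0.5, TARGET - hypothesis)

def revise(hypothesis, result):
    # Disconfirmation is information: move halfway toward the observation.
    return hypothesis + 0.5 * result.observation

def scientific_cycle(hypothesis, max_iter=50):
    for trial in range(1, max_iter + 1):
        result = run_experiment(hypothesis)
        if result.confirmed:
            return hypothesis, trial
        hypothesis = revise(hypothesis, result)
    return hypothesis, max_iter

final, trials = scientific_cycle(hypothesis=10.0)
print(f"converged to {final:.2f} after {trials} trials")
```

Every failed trial here is consumed as data; none is recorded as a verdict on the experimenter.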

“Failure in Culture”

Cultural views of failure vary along dimensions such as: 

  1. Individualism vs collectivism
  2. Fixed vs growth beliefs about intelligence
  3. Honor/shame vs guilt-based social systems
  4. Meritocratic vs relational social structures

– In some cultures, failure implies lack of effort.

– In others, it implies lack of ability.

– In others, it implies moral weakness.

– In yet others, it signals an opportunity for resilience.

Step 3: Connect to Educational Systems

Education systems reflect cultural assumptions.

Dimension        | Science-Oriented Model | Culture-Influenced Traditional Model
-----------------|------------------------|-------------------------------------
Role of mistakes | Expected and analyzed  | Penalized
Grading          | Iterative mastery      | One-shot evaluation
Feedback         | Diagnostic             | Judgmental
Identity impact  | Task-focused           | Person-focused
Revision         | Built-in               | Often limited

In many systems, a low grade becomes a fixed record rather than iterative data.  https://privsec.harvard.edu/data-lifecycle

Step 4: Theoretical Lenses to Explore

I would direct the student to engage with established theories; this grounds the project academically.

Step 5: Comparative Approach

Encourage comparative design.

Possible angles:

– Compare engineering education with liberal arts grading systems.

– Compare countries with high-stakes testing to those using mastery models.

– Compare entrepreneurial ecosystems to traditional schools.

– Interview students about emotional responses to failure.

A mixed-method approach could include:

– Survey on student beliefs

– Classroom policy analysis

– Teacher interviews

– Policy document review

Step 6: Explore Language and Framing

Have the student examine language:

– “Failing a test”

– “Not meeting standard”

– “Iteration”

– “Prototype”

– “Draft”

Language reflects worldview. In science, drafts are expected. In many schools, early drafts are penalized.

Step 7: Examine Structural Incentives

Failure culture is often incentive-driven.

Ask:

– Do teachers have time to allow revision?

– Do rankings determine funding?

– Are students competing for scarce opportunities?

– Does grading function as sorting?

Systems built for selection treat failure differently than systems built for development.

Step 8: Include Historical Perspective

Historically:

– Industrial-era schooling emphasized sorting.

– Scientific culture emphasized experimentation.

– Entrepreneurial cultures increasingly celebrate failure narratives.

The student could trace how industrial education models conflict with innovation economies.

Step 9: Ethical and Psychological Dimensions

Explore consequences:

– Anxiety

– Risk aversion

– Cheating behavior

– Innovation suppression

When failure threatens identity, people avoid intellectual risk.

When failure is normalized, experimentation increases.

Step 10: Practical Application Component

Encourage the student to design a small intervention:

– Replace one graded test with iterative resubmission.

– Implement reflection journals after errors.

– Compare performance under mastery grading vs traditional grading.

This moves the research from critique to design.

A Possible Thesis Direction

A strong thesis might argue:

> Scientific culture treats failure as epistemic progress, while many educational cultures treat failure as evaluative sorting. This divergence shapes student risk-taking, innovation, and long-term resilience.

Final Guidance to the Student

I would tell the student:

– Do not romanticize failure.

– Distinguish between productive failure and preventable negligence.

– Analyze incentives, not just attitudes.

– Look for structural causes, not just beliefs.

– Focus on how systems convert error into either information or stigma.

The most powerful insight is this:

A rocket explosion in engineering is analyzed.

A failed exam in many schools is archived.

Your task is to investigate why those two institutional responses differ—and what happens when education begins to resemble engineering more than sorting.

Write a Paper

I continued the investigation with …

Would you outline a paper for that student that would address this ‘difference in view of failure’ in our culture, with a view to changes that would bring these two views together?

…ChatGPT …

If pursued rigorously, this topic bridges psychology, sociology, anthropology, organizational science, and educational policy. It is intellectually rich and socially consequential.

This paper would examine how modern Western culture often treats failure as personal deficiency while scientific culture treats failure as valuable information, and it would propose structural reforms to align educational systems more closely with the scientific model. The goal is not to eliminate evaluation, but to redesign systems so that error becomes developmental rather than stigmatizing.

Below is a detailed, research‑ready outline suitable for an upper‑level undergraduate or graduate paper.

Working Title

“From Stigma to Signal: Reconciling Cultural and Scientific Views of Failure in Education”

Abstract (150–250 words)

– State the central contrast: cultural failure as identity threat vs scientific failure as valuable data.

– Identify the educational system as the institutional bridge between these views.

– Present the thesis: educational reform can integrate evaluative accountability with iterative learning models drawn from scientific practice.

– Briefly note proposed reforms (mastery grading, structured iteration, reframing of error).

I. Introduction

– Introduce the metaphor: a rocket explosion generates research; a failed exam generates a record.

– Define “failure” in three domains:

– Cultural

– Scientific

– Educational

– Present research question:

– How do cultural narratives of failure shape educational structures, and how might scientific models of iteration inform reform?

– Thesis statement.

II. Conceptual Framework

— A. Failure in Scientific Epistemology

– Hypothesis testing

– Falsifiability

– Iteration cycles

Include formal representation of the scientific cycle:

[Diagram: the scientific method cycle: observation/question → research → hypothesis → experiment → analyze data → report conclusions]

Emphasize:

– Error as necessary for knowledge production

– Failure as epistemic progress

— B. Failure in Cultural Psychology

Discuss the key distinction:

– “Task failure” vs “self failure”

— C. Failure in Educational Institutions

Analyze:

– Grading systems

– High-stakes testing

– Academic transcripts as permanent records

– Sorting vs development models

Argue that modern education often operationalizes cultural stigma rather than scientific iteration.

III. Structural Drivers of Failure Stigma

— A. Industrial-Era Schooling

– Efficiency

– Standardization

– Ranking

— B. Scarcity and Selection

– University admissions

– Credential inflation

– Competitive labor markets

— C. Incentive Misalignment

Actor        | Incentive        | Effect on Failure
-------------|------------------|---------------------
Students     | Avoid GPA damage | Risk avoidance
Teachers     | Cover curriculum | Limited iteration
Institutions | Rank performance | Sorting over mastery

IV. Psychological Consequences

— A. Risk Aversion

– Reduced intellectual experimentation

— B. Identity Threat

– Failure linked to self-worth

— C. Innovation Suppression

– Avoidance of challenging tasks

– Preference for guaranteed success

Contrast with scientific culture, where risk and iteration are structurally protected.

V. Case Studies and Comparative Models

Include examples such as:

– Mastery-based grading systems

– Engineering design studios

– Entrepreneurial incubators

– Competency-based education models

Analyze how these environments:

– Normalize iteration

– Decouple early failure from permanent record

– Provide diagnostic feedback

VI. Integrative Reform Proposal

— A. Redefining Failure as Iterative Feedback

Propose institutional shifts:

– Replace single high-stakes exams with iterative submissions.

– Build structured revision cycles into syllabi.

– Separate formative assessment from summative records.

— B. Grading Reform

Traditional Model       | Iterative Model
------------------------|------------------------------
One-shot exam           | Multiple attempts
Static GPA record       | Mastery tracking
Penalize early mistakes | Reward improvement trajectory
Performance ranking     | Competency certification
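The two grading models can be sketched side by side. The attempt records and the mastery threshold below are hypothetical, invented only to show how the same data yields different verdicts under each scoring rule.

```python
# Hypothetical attempt records: each student's scores across resubmissions.
attempts = {
    "student_a": [55, 68, 82],   # improves across iterations
    "student_b": [90],           # masters on the first try
    "student_c": [40, 52, 61],   # improving, but not yet at mastery
}

MASTERY = 80

def one_shot(scores):
    """Traditional model: only the first attempt counts."""
    return scores[0]

def mastery_tracking(scores):
    """Iterative model: the best attempt reflects current competence."""
    return max(scores)

for name, scores in attempts.items():
    first, best = one_shot(scores), mastery_tracking(scores)
    verdict = "mastered" if best >= MASTERY else "still developing"
    print(f"{name}: one-shot={first}, mastery={best} ({verdict})")
```

Under the one-shot rule, student_a is archived as a 55; under mastery tracking, the same student's record shows the improvement trajectory.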

— C. Transcript Redesign

Propose:

– Mastery badges

– Skill progression metrics

– Narrative evaluation components

— D. Cultural Reframing

Encourage:

– Public celebration of revision

– Reflection essays on errors

– Faculty modeling of intellectual humility

VII. Potential Criticisms and Constraints

Address:

– Grade inflation concerns

– Resource and time constraints

– Competitive external pressures

– Risk of lowering standards

Argue that rigor and iteration are not opposites.

VIII. Synthesis: Toward a Hybrid Model

Propose a reconciliation:

– Maintain accountability.

– Preserve evaluation.

– Embed structured iteration.

– Convert early failure into protected developmental space.

Express the hybrid learning model as:

E = f(A, I, F)

Where:

– ‘E’ = educational outcome

– ‘f’ = ‘is a function of …’

– ‘A’ = accountability

– ‘I’ = iteration

– ‘F’ = feedback quality

Argue that maximizing ‘E’ requires balancing all three variables.
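As a toy illustration of the balancing claim (the geometric-mean form is my assumption, chosen for illustration because it makes the outcome collapse whenever any one factor is neglected), the model might be instantiated as:

```python
def educational_outcome(accountability, iteration, feedback):
    """Toy instantiation of E = f(A, I, F). The geometric-mean form is
    an assumption made here for illustration: it encodes the claim that
    if any one factor collapses to zero, so does the outcome."""
    for v in (accountability, iteration, feedback):
        if not 0.0 <= v <= 1.0:
            raise ValueError("factors are normalized to [0, 1]")
    return (accountability * iteration * feedback) ** (1 / 3)

# A balanced system outperforms one that maximizes accountability alone.
balanced = educational_outcome(0.7, 0.7, 0.7)
lopsided = educational_outcome(1.0, 0.1, 0.4)
print(f"balanced={balanced:.2f}, lopsided={lopsided:.2f}")
```

Any functional form with the same property would serve the argument; the point is that the maximum of ‘E’ is not reached by maximizing a single variable.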

IX. Conclusion

– Restate the core contrast: stigma vs signal.

– Argue that educational systems currently reflect cultural anxiety about failure more than scientific logic.

– Emphasize that innovation economies require risk-tolerant learners.

– Conclude that reconciling these views is both pedagogically and socially strategic.

Suggested Methodology (Optional Section)

If empirical:

– Survey student beliefs about failure.

– Analyze syllabi for revision opportunities.

– Compare GPA models across institutions.

– Conduct interviews with STEM vs non-STEM faculty.

Final Framing Advice to the Student

Encourage the student to:

– Avoid idealizing science or demonizing education.

– Analyze systems rather than individuals.

– Distinguish emotional, structural, and epistemological dimensions of failure.

– Offer realistic reforms, not utopian proposals.

The strongest version of this paper will show that the difference in views of failure is not merely philosophical—it is institutional. And institutions can be redesigned.

The aim is not to remove standards, but to align education with the logic of discovery: error analyzed, knowledge refined, progress documented.

For Example

I continued with …

In the test world of engineering, there are typically two types of test failures that can occur. A Type I failure would be where you pass a faulty object, and a Type II would be where you fail a good object. There has always been a trade-off consideration in engineering of establishing your testing limits so that you avoid both of those extremes. Can that be used as a guide in restructuring the culture’s view of educational failure?

…ChatGPT …

In engineering and statistics, “Type I and Type II errors” describe two fundamental kinds of decision mistakes. In the acceptance-testing framing used throughout this discussion, a “Type I error” is a false positive—accepting or passing something that is actually defective (the consumer’s risk). A “Type II error” is a false negative—rejecting or failing something that is actually sound (the producer’s risk). The tension between these errors creates a design trade-off: tightening standards reduces one error while increasing the other.

That framework can serve as a powerful guide for rethinking how educational systems define and respond to “failure.”

The Engineering Analogy Applied to Education

In education, we can map the analogy as follows:

– “Educational Type I Error (False Positive)”

Passing or advancing a student who has not mastered the required knowledge or skills.

– “Educational Type II Error (False Negative)”

Failing, retaining, or labeling as incapable a student who actually has the capacity or partial mastery to succeed.

Just as in engineering, eliminating one error entirely increases the likelihood of the other. If standards are too lenient, competency erodes. If standards are too rigid, capable learners are misclassified and discouraged.

The key insight is that “assessment design is not about eliminating error, but optimizing trade-offs” in light of broader system goals.
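The trade-off can be made concrete with a small simulation. Everything below is a hypothetical sketch: mastery scores for two groups are drawn from overlapping normal distributions, and the pass threshold is swept across a range; the numbers are illustrative, not empirical.

```python
import random

random.seed(42)

# Hypothetical score distributions: students who mastered the material
# score higher on average, but the distributions overlap, so no single
# threshold is error-free.
masters = [random.gauss(75, 10) for _ in range(10_000)]
non_masters = [random.gauss(55, 10) for _ in range(10_000)]

def error_rates(threshold):
    """Return (type_i, type_ii) using this document's convention:
    Type I  = passing a student who has not mastered the material,
    Type II = failing a student who has."""
    type_i = sum(s >= threshold for s in non_masters) / len(non_masters)
    type_ii = sum(s < threshold for s in masters) / len(masters)
    return type_i, type_ii

for cutoff in (50, 60, 65, 70, 80):
    t1, t2 = error_rates(cutoff)
    print(f"cutoff={cutoff}: Type I={t1:.2%}, Type II={t2:.2%}")

# Raising the cutoff drives Type I toward zero while Type II climbs;
# lowering it does the reverse. The threshold is a policy choice.
```

Sweeping the cutoff makes the policy nature of the decision visible: no threshold eliminates both errors, so choosing one is choosing which error to tolerate.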

Reframing Educational Failure as Risk Management

Engineering cultures approach error probabilistically and systematically. Education often treats failure morally or personally. Applying the engineering model shifts the perspective from blame to calibration.

“Implications for restructuring educational culture:”

– Failure becomes a signal of system miscalibration, not a defect in the individual.

– Assessment thresholds are understood as policy choices with trade-offs.

– The cost of each error type is explicitly evaluated.

For example:

Error Type                                  | Educational Example                                 | Short-Term Effect           | Long-Term Cost
--------------------------------------------|-----------------------------------------------------|-----------------------------|------------------------------
Type I (false positive; “consumer’s risk”)  | Advancing a student without literacy mastery        | Maintains promotion rates   | Skill gaps compound over time
Type II (false negative; “producer’s risk”) | Retaining a student who could progress with support | Protects academic standards | Motivation loss, dropout risk

This framing forces a cultural question: “Which error is more costly to society?”

Adjusting Thresholds Through Support Systems

In engineering, when error trade-offs become unacceptable, designers do not merely move thresholds—they redesign the system. Education can follow the same logic.

Instead of tightening or loosening grading standards alone, systems can:

– Introduce “formative assessments” that reduce uncertainty before high-stakes decisions.

– Provide “iterative feedback loops”, analogous to quality control sampling.

– Offer “remediation pathways” that reduce the cost of Type I errors.

– Implement “multiple measures of competence” to reduce Type II misclassification.

– Relabel the error types as ‘producer’s’ vs ‘consumer’s’ risks.

In statistical terms, increasing the ‘signal-to-noise ratio’ improves both sensitivity and specificity without relying solely on stricter cutoffs. Some call this a ‘Hardware’ fix.
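The signal-to-noise point can be shown with a toy simulation: averaging several independent noisy assessments of the same hypothetical borderline student shrinks measurement noise by roughly the square root of the number of measures, so misclassification (here, wrongly failing a student whose true mastery is above the cutoff) falls without moving the cutoff at all. All parameters are illustrative assumptions.

```python
import random

random.seed(0)

TRUE_MASTERY = 70   # hypothetical true ability of a borderline student
CUTOFF = 65         # pass threshold, held fixed throughout
NOISE = 12          # per-assessment measurement noise (standard deviation)

def misclassify_rate(n_measures, trials=20_000):
    """Fraction of trials in which the average of n noisy assessments
    wrongly fails a student whose true mastery exceeds the cutoff."""
    errors = 0
    for _ in range(trials):
        scores = [random.gauss(TRUE_MASTERY, NOISE) for _ in range(n_measures)]
        if sum(scores) / n_measures < CUTOFF:
            errors += 1
    return errors / trials

for n in (1, 3, 9):
    print(f"{n} measure(s): misclassification = {misclassify_rate(n):.2%}")

# Averaging n independent measures shrinks the noise by roughly sqrt(n),
# so misclassification falls without tightening the cutoff itself.
```

This is the system-redesign move: improve the measurement rather than argue about where to put the threshold.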

Cultural Implications

Engineering cultures accept that some error is unavoidable. Educational cultures often stigmatize it. Reframing educational failure using the Type I/Type II model produces several shifts:

– From “punitive evaluation” to “diagnostic calibration”.

– From “binary success/failure labels” to probabilistic developmental models.

– From “single-shot testing” to iterative validation of competence.

Failure becomes part of system optimization rather than personal deficiency.

Ethical and Strategic Considerations

Unlike engineered objects, students are adaptive agents. A Type II error in education—incorrectly labeling a student as incapable—can reshape identity, motivation, and opportunity. That makes its social cost potentially higher than a typical engineering false negative.

However, persistent Type I errors—credentialing without competence—erode institutional trust and societal capability.

Therefore, restructuring educational culture requires:

– Explicit articulation of error tolerance.

– Alignment between assessment design and long-term societal goals.

– Continuous recalibration using outcome data.

Conclusion

The engineering trade-off between Type I and Type II failures provides a rigorous conceptual lens for rethinking educational failure. It reframes grading and advancement decisions as threshold-setting under uncertainty rather than judgments of worth.

By treating educational assessment as a system design problem—balancing risks, redesigning feedback loops, and clarifying error costs—institutions can move from a culture of blame to a culture of calibrated improvement.

‘You say that you want More’ …

https://en.wikipedia.org/wiki/Failure

https://en.wikipedia.org/wiki/Break.com

https://en.wikipedia.org/wiki/Wrongdoing

https://en.wikipedia.org/wiki/Winner_and_loser_culture

https://en.wikipedia.org/wiki/Zero-sum_game

https://en.wikipedia.org/wiki/Grading_in_education

https://en.wikipedia.org/wiki/Nash_equilibrium

https://en.wikipedia.org/wiki/Rejection

Formal Thought

I became fascinated by the realization that the “Formal Thought” process was something that not all adult humans had developed. And further, that the stages of “cognitive development” that psychologists like Piaget and others have studied in depth appear to be consistent among human children. Much like the physical growth stages.

So I assigned my assistant ChatGPT to the task of exploring the history of studies about “Formal Thought” development in humans, then to further investigate the recommendations of the experts about how to recognize and assist the development of this ability. What follows is its findings and my direction of its inquiry. I required that citations be provided for the sources of its conclusions. Here are our findings …

ChatGPT and I found this …

Formal operational thinking does not arrive like a switch flipping on. It typically appears gradually—often unevenly—across domains, tasks, and contexts. A parent or teacher’s role is less about identifying a precise “onset” and more about recognizing emerging capacities for abstraction and then deliberately scaffolding them.

Below is a practical, research-informed guide to recognizing and nurturing formal thought in adolescents.  Adults may benefit as well …

How to Recognize the Emergence of Formal Thought

Look for qualitative shifts in how the student approaches problems, not just whether they get correct answers.

“Cognitive Indicators”

– “Hypothetical reasoning”: The student considers “what if” scenarios without needing concrete examples.

– “Systematic testing of variables”: In experiments or arguments, they change one factor at a time and track outcomes.

– “Abstract concepts”: Comfort with ideas like justice, infinity, identity, probability, or ideology.

– “Propositional logic”: Ability to evaluate statements based on internal logic rather than real-world plausibility.

– “Metacognition”: Awareness of their own thinking process (“I assumed X, but that may not hold.”).

– “Future-oriented reasoning”: Long-term planning and projection of consequences.

“Behavioral and Social Indicators”

– Increased interest in philosophical, political, or ethical debates

– Questioning of authority based on principles rather than emotion

– Greater sensitivity to contradictions or inconsistencies

– Exploration of identity at a conceptual level

These signs often appear first in domains of personal relevance (e.g., social justice, identity, friendships) before showing up in formal academic reasoning.

What Does Not Indicate Formal Thought

– Simply arguing more

– Emotional intensity

– Memorizing abstract terminology

– Performing well only on structured test items

True formal reasoning involves “coordination of multiple variables”, not just verbal sophistication.

How to Encourage and Nourish Formal Thought

The goal is not to accelerate development artificially, but to provide the right cognitive environment.

“1. Invite Hypothetical Thinking”

Use open-ended prompts:

– “What might happen if…?”

– “What assumptions are we making?”

– “Can you think of an alternative explanation?”

Encourage exploration of possibilities without rushing to closure.

“2. Teach Systematic Problem Solving”

Model structured reasoning:

– Identify variables

– Generate hypotheses

– Test one variable at a time

– Evaluate evidence

In science classes, explicitly teach control-of-variables strategies.

In humanities, analyze competing interpretations systematically.
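The control-of-variables strategy itself can be demonstrated with a toy "experiment." The growth function, its factors, and its coefficients are invented purely for illustration; in a classroom, the function call would be replaced by an actual measurement.

```python
# A toy "experiment" for modeling control-of-variables reasoning.
def growth(light, water, fertilizer):
    return 2.0 * light + 1.0 * water + 0.5 * fertilizer

baseline = {"light": 1, "water": 1, "fertilizer": 1}
base_result = growth(**baseline)

# Control-of-variables strategy: change exactly one factor at a time
# and compare each trial against the unchanged baseline.
for factor in baseline:
    trial = dict(baseline)
    trial[factor] += 1                       # vary only this factor
    effect = growth(**trial) - base_result
    print(f"increasing {factor} by 1 changes growth by {effect:+.1f}")
```

Because only one factor moves per trial, each observed difference can be attributed unambiguously, which is exactly the reasoning habit the strategy is meant to build.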

“3. Encourage Argumentation with Evidence”

Formal thought grows through structured debate.

– Ask students to defend a claim with reasons.

– Require consideration of counterarguments.

– Ask them to revise positions when presented with stronger evidence.

The emphasis should be on reasoning quality, not winning.

“4. Develop Metacognition”

Prompt reflection:

– “How did you arrive at that conclusion?”

– “What strategy did you use?”

– “Where might your reasoning break down?”

Metacognitive discussion strengthens executive control and abstract reasoning.

“5. Introduce Complexity Gradually”

Present problems involving:

– Multiple interacting variables

– Conflicting perspectives

– Probabilistic outcomes

Avoid oversimplification. Complexity stimulates formal reasoning.

“6. Normalize Uncertainty”

Adolescents often seek certainty once they discover abstract reasoning. Encourage comfort with ambiguity:

– Discuss gray areas.

– Examine trade-offs.

– Highlight that intelligent people can disagree.

This helps move toward more advanced dialectical thinking.

“7. Provide Cognitive Autonomy”

Formal thought flourishes when students:

– Generate their own questions

– Design their own experiments

– Explore independent research topics

– Take ownership of learning

Overly directive instruction can suppress emerging abstract reasoning.

The Role of Emotion and Identity

Adolescence is also a period of identity formation. Abstract thinking often becomes most energized around:

– Moral issues

– Social justice

– Belief systems

– Personal values

Rather than dismiss intensity, channel it into structured inquiry and reflective dialogue.

What to Avoid

– Ridiculing early attempts at abstraction

– Overcorrecting logical errors harshly

– Forcing adult-level reasoning prematurely

– Equating skepticism with disrespect

Supportive challenge is more effective than confrontation.

A Developmental Reality Check

Formal operational thought:

– Appears unevenly

– Strengthens with education

– Depends heavily on practice

– Is not consistently used even by adults

Your role is to create conditions where:

– Working memory is supported (clear structure, reduced overload)

– Reflection is encouraged

– Evidence-based reasoning is modeled

– Dialogue is respectful and intellectually rigorous

In Summary

You recognize the onset of formal thought when a young person begins to:

– Think in possibilities rather than only realities

– Coordinate multiple variables

– Reflect on their own reasoning

– Argue from principle rather than impulse

You nourish it by:

– Encouraging hypothetical exploration

– Teaching structured reasoning

– Cultivating metacognition

– Introducing complexity

– Supporting autonomy and reflective dialogue

Formal thought develops through practice in intellectually rich environments. When adults provide challenge, structure, and respect for emerging reasoning, the adolescent mind expands accordingly.

Child Development Psychology Investigations

Jean Piaget’s theory of cognitive development—particularly the “formal operational stage”—remains one of the most influential frameworks in developmental psychology. Over the past several decades, and especially in recent years, research has not discarded Piaget’s ideas but has significantly “refined, qualified, and expanded” them. The contemporary view is more nuanced, emphasizing gradual development, domain specificity, sociocultural influences, and neurobiological maturation rather than a single stage shift.

Below is a structured overview of how the field has evolved, with particular focus on formal operational thought.

Overview of Piaget’s Formal Operational Stage

Piaget proposed that the “formal operational stage” begins around age 11–12 and is characterized by:

– Abstract reasoning

– Hypothetico-deductive thinking

– Systematic problem solving

– Propositional logic

– Ability to think about possibilities rather than only concrete realities

He described it as a qualitative cognitive shift from concrete to abstract reasoning.

Modern research has both supported and challenged aspects of this formulation.

Is Formal Operational Thought a Distinct Stage?

“From Stage-Like to Gradual Development”

Contemporary research suggests that:

– Cognitive development is “more gradual than stage-like”

– Abstract reasoning emerges “incrementally and unevenly”

– Adolescents may demonstrate formal reasoning in some domains but not others

This has led to a shift from a rigid stage model to a “domain-specific and skills-based perspective”.

For example:

| Piaget’s View                    | Contemporary View                                  |
|----------------------------------|----------------------------------------------------|
| Universal stage around age 11–12 | Gradual, variable emergence                        |
| Qualitative shift                | Quantitative improvements plus strategy changes    |
| Applies broadly across domains   | Often domain-specific                              |
| Logical reasoning central        | Executive function and working memory also crucial |

Many adults do not consistently use formal operational reasoning in everyday contexts, which further complicates the idea of a universal stage.

Neurodevelopmental Research and Brain Maturation

One of the most significant expansions since Piaget’s time comes from “cognitive neuroscience”.

“Adolescent Brain Development”

Neuroimaging research shows:

– Continued maturation of the “prefrontal cortex” into the mid-20s

– Increased connectivity between frontal and parietal regions

– Improvements in executive functions such as:

– Working memory

– Inhibitory control

– Cognitive flexibility

These neural changes support:

– Hypothetical reasoning

– Multi-variable problem solving

– Metacognition

Rather than a sudden cognitive shift, brain development suggests a “protracted developmental trajectory”.

Is There a Specific Trigger Like Puberty?

This is a critical question.

“Puberty as a Contributing Factor”

Research indicates:

– Pubertal hormonal changes influence brain remodeling

– Hormones such as testosterone and estradiol affect neural plasticity

– Puberty correlates with increased sensation-seeking and social cognition shifts

However:

– There is “no single biological trigger” that automatically ushers in formal operational thought

– Chronological age, schooling, cultural context, and cognitive stimulation all play major roles

– Executive function improvements often precede or extend beyond puberty

In short, puberty is “associated with” but does not deterministically cause formal operational reasoning.

Dual-Process and Information-Processing Models

Since Piaget, cognitive science has developed models that reinterpret formal reasoning through different lenses.

“Information-Processing Perspective”

Instead of stages, this approach emphasizes:

– Working memory capacity

– Processing speed

– Strategy acquisition

– Knowledge base expansion

Adolescents become capable of abstract reasoning partly because:

– Working memory increases

– They can coordinate multiple variables simultaneously

– They acquire formal logic strategies

“Dual-Process Theory”

Modern research distinguishes between:

– “System 1”: Intuitive, fast thinking

– “System 2”: Analytical, reflective thinking

Formal operational reasoning aligns closely with “System 2 processing”, which:

– Develops gradually

– Requires cognitive effort

– Depends on executive control

This model helps explain why even adults often revert to intuitive reasoning.

Postformal Thought

Some theorists have proposed a stage beyond Piaget’s formal operations.

Characteristics of Postformal Thought

– Dialectical reasoning

– Integration of emotion and logic

– Tolerance of ambiguity

– Context-sensitive thinking

– Recognition of relativism

Unlike formal operations, postformal reasoning:

– Is not universally achieved

– Often emerges in adulthood

– Depends heavily on education and life experience

This extension reflects dissatisfaction with viewing formal logic as the endpoint of cognitive maturity.

Sociocultural and Educational Influences

Vygotskian and contemporary sociocultural perspectives emphasize:

– Schooling as a major driver of formal reasoning

– Cultural variability in abstract reasoning

– Instruction in algebra and scientific thinking as scaffolding

Cross-cultural studies show that:

– Formal operational reasoning is more common in populations with formal education in science and mathematics

– It is not uniformly observed across all cultures

This challenges the universality of Piaget’s timeline.

Domain-Specific and Expertise Models

Research over the past two decades increasingly supports:

– Formal reasoning develops within “specific knowledge domains”

– Expertise can produce advanced reasoning in narrow areas

– Adolescents may reason abstractly in social domains before scientific domains

For example:

– Teenagers often show sophisticated reasoning about social identity and morality

– Yet struggle with formal scientific variable control tasks

Thus, development is **context-sensitive rather than globally staged**.

Metacognition and Adolescent Development

A major contemporary expansion involves “metacognition”:

– Thinking about thinking

– Monitoring and regulating cognitive strategies

– Awareness of one’s own reasoning limits

Adolescence is marked by growth in:

– Self-reflection

– Epistemic awareness

– Understanding of uncertainty and evidence

This aligns with, but is not identical to, Piaget’s formal operations.

Current Consensus

The modern position in developmental science can be summarized as follows:

– Formal operational reasoning is “real but variable”

– It emerges gradually rather than at a fixed age

– It is supported by neurobiological maturation

– Puberty contributes but is not a single trigger

– It is shaped by education and culture

– It is domain-specific rather than universal

– Many adults do not consistently use formal reasoning

Rather than rejecting Piaget, researchers now see him as having identified an important developmental direction—“increasing abstraction and hypothetico-deductive reasoning”—while overestimating its universality and stage-like structure.

Direct Answer to Your Central Question

There is “no specific date, age, or biological switch” that initiates formal operational thought. While puberty and adolescent brain development contribute to the capacity for abstract reasoning, the emergence of formal thought depends on:

– Neural maturation

– Executive function growth

– Educational exposure

– Cultural context

– Domain-specific knowledge

Formal operational reasoning is now understood as a “gradual, scaffolded developmental achievement”, not a sudden transformation.

Closing Perspective

Piaget’s insight that adolescence introduces a new level of abstraction remains foundational. What has changed is the understanding of ‘how’ and ‘when’ that capacity emerges. The field now sees formal thought as:

– Gradual

– Variable

– Context-dependent

– Neurodevelopmentally supported

– Educationally shaped

In that sense, Piaget was directionally correct—but the developmental story has become far more dynamic, biologically grounded, and culturally embedded than originally proposed.

Helpful Activities

Formal operational thinking strengthens through repeated exposure to abstraction, hypothesis testing, strategic planning, and evidence-based reasoning. While no game or show “causes” formal thought, certain activities reliably exercise the underlying cognitive capacities—working memory, multi-variable coordination, probabilistic reasoning, and metacognition.

Below is a structured guide to games, activities, and selected media that can meaningfully support progression toward more advanced abstract reasoning.

Strategy and Logic Games

These require planning, conditional reasoning, and systematic thinking.

“Board and Tabletop Games”

– “Chess” – Develops multi-step planning, counterfactual thinking (“If I move here, then…”), and working memory.

– “Go” – Encourages large-scale pattern recognition and long-term strategic abstraction.

– “Settlers of Catan” – Involves probability tracking, negotiation, and multi-variable resource management.

– “Pandemic” – Cooperative planning with conditional forecasting and shared hypothesis testing.

– “Risk” – Encourages probabilistic reasoning and strategic anticipation.

– “Mastermind” – Classic hypothetico-deductive reasoning and elimination logic.

These games are effective because they require systematic testing of possibilities rather than impulsive action.

Logic and Deduction-Based Games

These directly exercise formal reasoning structures.

– “Sudoku” – Constraint satisfaction and logical elimination.

– “KenKen” – Numerical abstraction and rule integration.

– “Logic grid puzzles” – Multi-variable coordination and symbolic reasoning.

– “Clue (Cluedo)” – Deductive elimination and probabilistic inference.

– “Escape room games” – Integrate symbolic reasoning and collaborative hypothesis testing.

The key feature is structured elimination based on rules rather than guesswork.

Digital and Video Games That Promote Systems Thinking

Certain video games are cognitively demanding in ways aligned with formal reasoning.

– “Portal / Portal 2” – Spatial abstraction, logical sequencing, and constraint-based reasoning.

– “Civilization (series)” – Long-term strategic modeling of complex systems.

– “Kerbal Space Program” – Physics modeling, experimentation, and variable testing.

– “Factorio / Satisfactory” – Systems optimization and multi-variable coordination.

– “The Witness” – Pattern abstraction and rule inference.

These games are strongest when students articulate their reasoning rather than simply play intuitively.

Debate and Argumentation Activities

Formal thought is especially strengthened by structured discourse.

– “Debate clubs” – Evidence evaluation and counterargument development.

– “Model United Nations” – Policy reasoning and multi-perspective thinking.

– “Mock trials” – Argument construction under logical constraints.

– “Socratic seminars” – Metacognitive dialogue and conceptual abstraction.

Argumentation research shows that reasoning develops most when students must justify and revise positions.

Scientific and Experimental Activities

Hypothetico-deductive reasoning is central to formal thought.

– Designing simple experiments where only one variable changes

– Probability experiments with dice or simulations

– Coding projects (e.g., Python or Scratch) that require logical sequencing

– Robotics kits that require systems troubleshooting

The emphasis should be on planning and prediction before testing.
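A dice experiment can follow exactly that plan-predict-test sequence. The sketch below (plain Python; the specific event, two dice summing to 7, is our chosen example) states the prediction first, then checks it by simulation:

```python
import random

# Plan and predict first: two fair dice sum to 7 in exactly 6 of the
# 36 equally likely outcomes, so the predicted probability is 6/36.
predicted = 6 / 36

# Then test the prediction by simulation.
trials = 100_000
hits = sum(random.randint(1, 6) + random.randint(1, 6) == 7
           for _ in range(trials))
observed = hits / trials

print(f"predicted {predicted:.3f}, observed {observed:.3f}")
```

Asking students to commit to the predicted number before running the simulation is what makes this an exercise in hypothetico-deductive reasoning rather than mere play.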

Television and Media That Encourage Abstract Thinking

Passive viewing is less powerful than interactive engagement, but certain series can stimulate abstract reasoning if discussed afterward.

“Science and Systems-Based Shows”

– “MythBusters” – Hypothesis testing and experimental design.

– “Cosmos” (various versions) – Abstract scientific reasoning and large-scale conceptual thinking.

– “NOVA” – Scientific explanation and evidence evaluation.

“Complex Narrative Series”

Shows with layered plots can support multi-perspective reasoning:

– “Sherlock” – Deductive reasoning.

– “Black Mirror” – Ethical hypotheticals and technological abstraction.

– “The Good Place” – Moral philosophy framed in accessible narratives.

The enhancement comes primarily from post-viewing discussion:

– What assumptions were made?

– What alternative explanations exist?

– What ethical principle applies?

Philosophy and Thought Experiments

Even brief exposure to structured thought experiments strengthens abstraction.

The trolley problem

https://en.wikipedia.org/wiki/Trolley_problem

Brain-in-a-vat scenarios

https://en.wikipedia.org/wiki/Brain_in_a_vat

Prisoner’s dilemma simulations

https://en.wikipedia.org/wiki/Prisoner%27s_dilemma

Moral dilemma case studies

https://en.wikipedia.org/wiki/Ethical_dilemma

These cultivate propositional reasoning detached from immediate reality.
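The prisoner’s dilemma in particular lends itself to a short hands-on simulation. The sketch below (plain Python; the 3/0/5/1 payoff values are the conventional illustrative numbers, and the strategy names are ours) pits an always-defect strategy against tit-for-tat over repeated rounds:

```python
# Conventional illustrative payoffs (points to the row player; higher is better):
# both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5; lone cooperator -> 0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strategy_a, strategy_b, rounds=10):
    """Iterated prisoner's dilemma; each strategy sees the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    return opponent_history[-1] if opponent_history else "C"

print(play(tit_for_tat, tit_for_tat))    # sustained cooperation: (30, 30)
print(play(always_defect, tit_for_tat))  # defector wins the pairing: (14, 9)
```

Discussing why the defector “wins” the head-to-head pairing yet earns far fewer points than two cooperators is precisely the kind of propositional reasoning the dilemma is meant to provoke.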

Creative Activities That Promote Formal Thinking

– Writing alternate-history scenarios

– Designing a new society with defined rules

– Creating strategy games

– Mathematical proof exploration

– Coding simulations

Creativity becomes developmentally powerful when it requires rule-based systems rather than pure expression.

The Most Powerful Factor: Guided Reflection

The activity itself matters less than the cognitive framing.

After any game or show, ask:

– What variables influenced the outcome?

– What strategy did you test?

– What would happen if we changed one rule?

– What assumptions were hidden?

Metacognitive dialogue transforms entertainment into cognitive training.

Activities Ranked by Direct Engagement with Formal Reasoning

| Activity Type        | Level of Formal Reasoning Engagement |
|----------------------|--------------------------------------|
| Structured debate    | Very High                            |
| Experimental design  | Very High                            |
| Strategy board games | High                                 |
| Video games          | High                                 |
| Logic puzzles        | Moderate–High                        |
| Passive TV viewing   | Low (unless discussed reflectively)  |

Final Perspective

Formal thought develops through:

– Coordinating multiple variables

– Testing hypothetical possibilities

– Reflecting on reasoning

– Engaging with structured complexity

The most effective tools are those that require planning, abstraction, and justification—not just reaction speed or memorization. When games and media are paired with reflective dialogue, they become powerful scaffolds for advancing formal reasoning capacity.

Research History

Research on formal operational thought has evolved into broader investigations of adolescent cognitive development, abstract reasoning, executive function, and neurocognitive maturation. While few researchers today use Piaget’s exact terminology, many leading scholars are studying the mechanisms underlying what Piaget described as the “formal operational phase”. Below is a curated list of prominent contemporary researchers whose work directly relates to this area.

Cognitive and Neurodevelopmental Researchers

These scholars focus on adolescent brain maturation, executive function, and abstract reasoning capacity.

– “Laurence Steinberg” – Temple University

Research on adolescent brain development, risk-taking, and psychosocial maturity; integrates neuroscience with cognitive development.

– “B. J. Casey” – Yale University

Studies prefrontal cortex maturation and cognitive control in adolescence using neuroimaging.

– “Ronald E. Dahl” – University of California, Berkeley

Focuses on puberty-related brain development and its interaction with cognition and emotion.

– “Beatriz Luna” – University of Pittsburgh

Researches developmental trajectories of executive function and cognitive control using longitudinal neuroimaging.

– “Jay Giedd” – Formerly NIH; now at UC San Diego

Pioneer in adolescent brain imaging research documenting structural brain changes through adolescence.

Executive Function and Information-Processing Models

These researchers explore working memory, cognitive flexibility, and control processes that support formal reasoning.

– “Adele Diamond” – University of British Columbia

Leading authority on executive functions and their developmental progression.

– “Philip D. Zelazo” – University of Minnesota

Known for work on cognitive control, metacognition, and reflective processing in childhood and adolescence.

– “Nelson Cowan” – University of Missouri

Studies working memory development and capacity limits across development.

– “Robert Siegler” – Carnegie Mellon University

Prominent for strategy-based and information-processing approaches to cognitive development.

Scientific Reasoning and Hypothetico-Deductive Thinking

These researchers study how children and adolescents reason about evidence, variables, and scientific problems—core features of formal operational thought.

– “David Klahr” – Carnegie Mellon University

Research on scientific reasoning and control-of-variables strategy development.

– “Deanna Kuhn” – Columbia University

Studies epistemic development, argumentation, and adolescent reasoning about evidence.

– “Clark Chinn” – Rutgers University

Focuses on epistemic cognition and scientific reasoning processes.

– “Jonathan T. Evans” – University of Plymouth

Works on dual-process theory and analytic reasoning.

Epistemic Development and Metacognition

These scholars investigate how adolescents understand knowledge, uncertainty, and relativism—extensions of formal operational reasoning.

– “Baruch Fischhoff” – Carnegie Mellon University

Researches adolescent decision-making and risk reasoning.

– “Patricia King” – Co-developer of the Reflective Judgment Model

Studies postformal and epistemic reasoning in adolescence and adulthood.

– “Kurt Fischer” (deceased, but highly influential) – Harvard University

Proposed dynamic skill theory as an alternative to rigid stage models.

Neo-Piagetian Theorists

These researchers explicitly extend Piaget’s framework while integrating modern cognitive science.

– “Juan Pascual-Leone” – York University (Toronto)

Developed mental attention theory linking working memory growth to stage progression.

– “Robbie Case” (deceased, foundational neo-Piagetian)

Integrated working memory capacity with stage development.

– “Graeme Halford” – University of Queensland

Research on relational complexity theory and limits of abstract reasoning.

Sociocultural and Educational Perspectives

These scholars examine how schooling and cultural context shape abstract reasoning.

– “Michael Tomasello” – Duke University

Focuses on social cognition and cultural learning mechanisms.

– “Barbara Rogoff” – University of California, Santa Cruz

Studies cultural influences on cognitive development.

– “Daniel Willingham” – University of Virginia

Researches how formal reasoning and critical thinking are taught in educational settings.

Trends in Current Research

Contemporary research on formal operational reasoning often appears under the following themes:

– Executive function development

– Adolescent brain maturation

– Scientific reasoning and argumentation

– Epistemic cognition

– Dual-process models of reasoning

– Relational complexity theory

– Metacognitive development

– Decision science in adolescence

The terminology has shifted, but the core question—how humans develop abstract, hypothetical, and systematic reasoning—remains active and well-studied.

Summary

There is no single field labeled “formal operational research” today. Instead, the legacy of that concept lives on across:

– Developmental cognitive neuroscience

– Executive function research

– Scientific reasoning studies

– Epistemic and metacognitive development

– Neo-Piagetian cognitive capacity models

If your interest lies specifically in the ‘emergence of abstract reasoning in adolescence’, researchers like “Deanna Kuhn, David Klahr, Philip Zelazo, B. J. Casey, Beatriz Luna, and Laurence Steinberg” would be particularly central figures in contemporary work related to what Piaget described as the formal operational stage.

The field has shifted from asking “When does the stage begin?” to asking “What mechanisms—neural, cognitive, educational, and cultural—enable increasingly abstract reasoning?”

Further Inquiry

https://www.verywellmind.com/piagets-stages-of-cognitive-development-2795457

https://practicalpie.com/formal-operational-stage

https://www.ebsco.com/research-starters/history/jean-piagets-theory-cognitive-development

https://www.britannica.com/science/formal-operational-stage

https://www.medicalnewstoday.com/articles/325030

https://socialsci.libretexts.org/Courses/North_Central_State_College/PSYC_2030_Child_Psychology/10:_Cognitive_Theory/10.04:_Piaget-_Formal_Operational_Stage

LUCK Control

Leave a comment

This collection of links and information may help you move from letting luck direct many of your life events to controlling random events so that you achieve your goals and find more success.  This is by no means a tutorial for being successful.  It is a collection of thoughts about luck and the control we have over it.  It is for you to explore these thoughts, and others, to find your own path to success.  It can be done, and it is not that difficult for most.

First we look at randomness.  Then we realize that raw randomness is not that common, and it is extremely difficult to achieve intentionally.  Numerous forces create cycles and biases that channel random behaviors into predictable consequences.  Some of these are easily recognized, such as weather, seasons, gravity, …  others are less obvious.  Recognizing these forces, and syncing with them, will give you a reasonable amount of control over the events of your life.

We look at probability to determine the likelihood of a particular event happening or not.  The calculations are not always intuitive, but they can be very helpful in analyzing a situation.  In the Monty Hall problem, even some professional mathematicians got it wrong.  That problem is perfect for our study because it is so counterintuitive.
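When intuition and calculation disagree, simulation can settle the argument. The sketch below (plain Python; the helper name `monty_hall` is ours) estimates the win rates for staying with the first door versus switching after the host opens a losing door:

```python
import random

def monty_hall(trials=100_000):
    """Estimate win rates for 'stay' vs 'switch' in the Monty Hall problem."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's first choice
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        # Switching means taking the one remaining closed door.
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

stay, switch = monty_hall()
print(f"stay ≈ {stay:.3f}, switch ≈ {switch:.3f}")  # roughly 1/3 vs 2/3
```

Running it makes the counterintuitive answer hard to deny: switching wins about twice as often as staying.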

Next we look at the state of mind and body required to manipulate forces toward the desired conclusion.  Many interventions require skill, talent, muscle control, emotional stability, experience, self-assurance, patience, maturity, unobstructed thought processes, … random luck may still play a part, but the objective is to reduce dependence upon it as much as possible.  You must think beyond the personal effects of the manipulation and account, with empathy, for everything else.  You must realize the privilege of your life, and behave without taking unfair advantage.  Taking candy from a baby is no accomplishment.  You must recognize opportunities and prepare yourself to capitalize on them.  Basically, you position yourself in the path of good luck and out of the way of bad luck.

Uncertainty will always exist and must be permitted and even encouraged.  Any worthwhile development is an iterative process.  You try something, evaluate its effectiveness, modify the manipulation, and try again.  Failures are permitted, encouraged, expected, and embraced for their educational value.  The best solution is rarely the initial attempt.  You learn by making mistakes.  Rapid prototyping is often required.

You are then ready to take the gamble: you risk something of value in order to potentially achieve something of more value.  Timing may be an important element in your plan.  Almost all actions, physical or otherwise, carry a degree of gamble.  You decide to act based upon your assessment of the risk and the reward.  Usually it is a question of trade-offs; you can’t have the best of all dimensions.  “Accentuate the positive, eliminate the negative, and don’t mess with Mister In-Between.”

Now comes the magic.  From the earliest of times, people have attempted to influence or anticipate future events based upon magical manipulations and understandings.  Prediction of the future is perhaps the most basic of human activities.  What will happen next? Modern science follows from these early efforts.  Science is like magic, only real!

Determining when success has been achieved is perhaps the most difficult task.  Goals achieved commonly carry a bittersweet aspect.  Success is not an either/or situation; it is measured by degrees.  What opportunities were missed in order to reach a goal?  Was it worth the sacrifices?  Who got hurt?  How do you measure?  Is this the completion of a single iteration of many?  Is this success sustainable and survivable?  What was lost in the process?  Many of these questions are not answerable until considerable time has passed.  Unfortunately, the answer will depend upon your individual (selfish) perspective.  Be aware of the choices you make.

Now it is time for you to explore the related topics, at your leisure.

CONTENTS

Luck

Chance

Randomness

Random Number Generator

Probability

Monty Hall Problem

Cell Growth

Confidence 

Self Control

Empathy

Privilege

Opportunism

Uncertainty

Gambling

Lottery

Magic

Success

Loss Function

Victory

Survival

Performance

Infinity & Beyond

The format of this collection is as follows:

A general topic is identified

A brief definition of the topic is provided

A hyperlink is provided, by clicking on a picture or word, to a wikipedia article about the topic

A table of contents of the wikipedia article is provided

The intent is that you will learn using a self-directed approach.  You are encouraged and enabled to ‘drill down’ into a topic until you have satisfied or exhausted your curiosity.  The material is not intended to be consumed sequentially or in any prescribed order.  You are in control, and can jump around as you wish.  Your needs and passions are for you to determine.  This approach should serve the mature student very well.

Lucky – Being aware of something nice happening to you, which you didn’t earn.

You did nothing – something nice happened – you are lucky – no skill required – wishes – will

To gain success or something desirable by chance

1. The chance happening of fortunate or adverse events; fortune

2. Good fortune or prosperity; success

3. One’s personal fate or lot

  • Etymology and definition
    Lack of control
    Fallacy
    Essence
    Self-fulfilling prophecy
    Social aspects
    Games
    Lotteries
    Means of resolving issues
    Numerology
    Science
    In religion and mythology
    Buddhism
    Christianity and Judaism
    Hinduism
    Islam
    Belief measurement
    See also
    Bibliography
    References
    External links

Chance – random likelihood of something happening – randomness

1.

a. The unknown and unpredictable element in happenings that seems to have no assignable cause.

b. A force assumed to cause events that cannot be foreseen or controlled; luck: Chance will determine the outcome.

2. often chances The likelihood of something happening; possibility or probability: Chances are good that you will win. Is there any chance of rain?

3. An accidental or unpredictable event: “He sprang to his feet and turned to run. At the same moment by a lucky chance the moon broke through the clouds” (Arthur Conan Doyle).

4. A favorable set of circumstances; an opportunity: a chance to escape.

5. A risk or hazard; a gamble: took a chance that the ice would hold me.

6. Games A raffle or lottery ticket.

7. Baseball An opportunity to make a putout or an assist that counts as an error if unsuccessful.

  • Definition
    Discrete probability distribution
    Example
    Continuous probability distribution
    Relationship between the likelihood and probability density functions
    In general
    Likelihoods for mixed continuous–discrete distributions
    Regularity conditions
    Likelihood ratio and relative likelihood
    Likelihood ratio
    Relative likelihood function
    Likelihood region
    Likelihoods that eliminate nuisance parameters
    Profile likelihood
    Conditional likelihood
    Marginal likelihood
    Partial likelihood
    Products of likelihoods
    Log-likelihood
    Graph
    Likelihood equations
    Exponential families
    Example: the gamma distribution
    Background and interpretation
    Historical remarks
    Interpretations under different foundations
    Frequentist interpretation
    Bayesian interpretation
    Likelihoodist interpretation
    AIC-based interpretation
    See also
    Notes
    References
    Further reading
    External links
  • Randomness

What’s going to happen next?

  • History
    In science
    In the physical sciences
    In biology
    In mathematics
    In statistics
    In information science
    In finance
    In politics
    Randomness and religion
    Applications
    Generation
    Measures and tests
    Misconceptions and logical fallacies
    Fallacy: a number is “due”
    Fallacy: a number is “cursed” or “blessed”
    Fallacy: odds are never dynamic
    See also
    Notes
    References
    Further reading
    External links

Random Number Generator

It is VERY difficult to produce a true Random Number Generator

Patterns creep in. The recognition of these patterns will lead you to success.
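One classic illustration: a simple linear congruential generator (the family behind many older library `rand()` functions) leaks an obvious pattern in its low-order bit. A minimal Python sketch, using the well-known old C-library constants:

```python
def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Linear congruential generator with classic (old C library) constants."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

# Because the multiplier and increment are both odd, the low-order bit
# strictly alternates 0, 1, 0, 1, ... — a pattern a true random source
# would essentially never produce.
gen = lcg(seed=42)
low_bits = [next(gen) % 2 for _ in range(12)]
print(low_bits)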

  • Practical applications and uses
    “True” vs. pseudo-random numbers
    Physical methods
    Computational methods
    By humans
    Post-processing and statistical checks
    Other considerations
    Reshaping the distribution
    Uniform distributions
    Other distributions
    Whitening
    Low-discrepancy sequences as an alternative
    Activities and demonstrations
    Backdoors
    See also
    References
    Further reading
    External links

Probability – the statistical prediction of an event happening

1. the condition of being probable

2. an event or other thing that is probable

3. (Statistics) statistics a measure or estimate of the degree of confidence one may have in the occurrence of an event, measured on a scale from zero (impossibility) to one (certainty). It may be defined as the proportion of favorable outcomes to the total number of possibilities if these are indifferent (mathematical probability), or the proportion observed in a sample (empirical probability), or the limit of this as the sample size tends to infinity (relative frequency), or by more subjective criteria (subjective probability)
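The relative-frequency idea in definition 3 can be demonstrated in a few lines. A small Python sketch (the fair coin, with true probability 0.5, is our chosen example): the empirical proportion of heads drifts toward the mathematical probability as the sample grows.

```python
import random

# The observed fraction of heads (empirical probability) converges toward
# the mathematical probability 0.5 as the number of flips increases.
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} flips: observed {heads / n:.4f} (true value 0.5)")
```

At 100 flips the estimate is often visibly off; at a million it rarely strays far — the relative-frequency interpretation in action.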

  • Interpretations
    Etymology
    History
    Theory
    Applications
    Mathematical treatment
    Independent events
    Mutually exclusive events
    Not mutually exclusive events
    Conditional probability
    Reverse probability
    Summary of probabilities
    Relation to randomness and probability in quantum mechanics
    See also
    Notes
    References
    Bibliography
    External links
Monty Hall Problem

Another way of thinking about it
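One concrete “way of thinking about it” is to simulate the game under the standard assumptions (the host knows where the car is and always opens a goat door). A sketch:

```python
import random

def play(switch, trials=100_000, rng=random.Random(1)):
    """Simulate Monty Hall; return the fraction of games won."""
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)            # prize placed at random
        pick = rng.randrange(3)           # contestant's first choice
        # Host opens a door that is neither the pick nor the car.
        # (When two goat doors qualify, taking the first is arbitrary
        # and does not change the odds for these strategies.)
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

stay_rate = play(switch=False)    # hovers near 1/3
switch_rate = play(switch=True)   # hovers near 2/3
print(stay_rate, switch_rate)
```

Switching wins about twice as often as staying, which is the whole paradox in two numbers.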

  • Paradox
    Standard assumptions
    Simple solutions
    Vos Savant and the media furor
    Confusion and criticism
    Sources of confusion
    Criticism of the simple solutions
    Solutions using conditional probability and other solutions
    Refining the simple solution
    Conditional probability by direct calculation
    Bayes’ theorem
    Strategic dominance solution
    Solutions by simulation
    Variants
    Other host behaviors
    N doors
    Quantum version
    History
    See also
    Similar puzzles in probability and decision theory
    References
    Bibliography
    Further reading
    External links
Cell Growth

Natural growth is the result of random behaviors encouraged in beneficial directions.

  • Mechanisms of cell growth control
    Cell growth regulation in animals
    Cell populations
    Cell size
    Yeast cell size regulation
    Linking Cdr2 to Wee1
    Cell polarity factors
    Other experimental systems for the study of cell size Regulation
    Cell division
    Comparison of the three types of cell division
    Sexual reproduction
    Disorders
    Measurement methods
    See also
    References
    Books
    External links

Confidence – pretty sure

1.

a. A belief or conviction that an outcome will be favorable

b. Belief in the certainty of something

2. Belief in the effectiveness of one’s own abilities or in one’s favorable acceptance by others; self-confidence.

3. Trust or faith in a person or thing:

4. A trusting relationship: 

5.

  1. That which is confided; a secret
  2. A feeling of assurance that a confidant will keep a secret
  • History
    Theories and correlations with other variables and factors
    Self-confidence as an intra-psychological variable
    Relationship to social influences
    Variation in different categorical groups
    Children
    Students
    Men versus women
    Stereotype threat
    Self-confidence in different cultures
    Athletes
    Measures
    Wheel of Wellness
    Implicit vs. explicit
    Extent
    Lack of self-confidence
    Confidence bias
    See also
    References
Self Control
  • As a virtue
    Research
    Counteractive
    Satiation
    Construal levels
    Human and non-human
    Alternatives
    Skinner’s survey of techniques
    Physical restraint and physical aid
    Changing the stimulus
    Depriving and satiating
    Manipulating emotional conditions
    Using aversive stimulation
    Drugs
    Operant conditioning
    Punishment
    “Doing something else”
    Brain regions involved
    Prefrontal cortex
    Outcomes as determining whether a choice is made
    Physiology of behavior
    “The Marshmallow Test”
    Ego depletion
    See also
    References
    Further reading
    External links
Empathy
  • Etymology
    Definitions
    General
    Distinctions between empathy and related concepts
    Classification
    Development
    Evolution across species
    Ontogenetic development
    Individual differences
    Sex differences
    Environmental influences
    Empathic anger and distress
    Anger
    Distress
    Influence on helping behavior
    Genetics
    General
    Neuroscientific basis of empathy
    Impairment
    Autism
    Psychopathy
    Other conditions
    In educational contexts
    In intercultural contexts
    Practical issues
    Benefits of empathizing
    Empathic inaccuracy
    Problems created by too much empathy and empathic bias
    Empathy and power
    Empathic distress fatigue
    Disciplinary approaches
    Philosophy
    Ethics
    Phenomenology
    History
    Psychotherapy
    Business and management
    Evolution of cooperation
    Measurement
    Self-report measures
    International comparison of country-wide empathy
    Other animals and empathy between species
    In fiction
    See also
    References
    Further reading
    External links
Privilege
  • Extent
    Access to the Sovereign
    Scandalum magnatum
    Trial by peers
    Freedom from arrest
    Privilege myths
    See also
    Notes and references
    Further reading
Opportunist
  • Definitions
    Etymology
    Moral connotations
    Human behaviour
    Eight main contexts
    Five main organizational influences
    Use of the term in specific areas
    Professional
    Intellectual
    Sexual
    Evolutionary
    Biological
    Political
    Economic
    Game theory
    Social
    Marxist theory
    Legal
    Spiritual
    See also
    References

Certainty/Uncertainty – willing to bet – with little doubt – a measure of degree – a percentage

1. The condition of being uncertain; doubt.

2. Something uncertain: the uncertainties of modern life.

3. Statistics The estimated amount or percentage by which an estimated or calculated value may differ from the true value.

  • Concepts
    Uncertainty
    Uncertainty versus variability
    Knightian uncertainty
    Unreferenced original research
    In measurements
    In the media
    Applications
    Philosophy
    Artificial intelligence
    See also
    References
    Further reading
    External links

Gambling – risking something to achieve gain

1. A bet, wager, or other gambling venture.

2. An act or undertaking of uncertain outcome; a risk: I took a gamble that stock prices would rise.

  • History
    Great Britain
    United States
    Regulation
    Insurance
    Asset recovery
    Religious views
    Buddhism
    Hinduism
    Judaism
    Christianity
    Catholicism
    Protestantism
    Other Christian denominations
    Islam
    Bahá’í Faith
    Types
    Casino games
    Table games
    Electronic gambling
    Other gambling
    Non-casino games
    Fixed-odds betting
    Parimutuel betting
    Sports betting
    Virtual sports
    Arbitrage betting
    Other types of betting
    Staking systems
    Other uses of the term
    Negative consequences
    Psychological biases
    See also
    References
    Further reading
    External links
Lottery

More like an equal-percentage wager

the best bet for a poor gambler?
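Whether a lottery really is “the best bet” can be framed as expected value. The ticket price, jackpot, and odds below are purely illustrative (not any real lottery):

```python
# Hypothetical lottery: $2 ticket, $1,000,000 jackpot,
# 1-in-10,000,000 chance of winning (illustrative numbers only).
ticket_price = 2.00
jackpot = 1_000_000
p_win = 1 / 10_000_000

# Expected value: average winnings per ticket, minus its cost.
expected_value = p_win * jackpot - ticket_price
print(f"Expected value per ticket: ${expected_value:.2f}")
```

A negative expected value means the lottery is a losing bet on average; its appeal to a poor gambler is the JND-style asymmetry of the payoff, not the arithmetic.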

  • Classical history
    Medieval history
    Early modern history
    France, 1539–1789
    England, 1566–1826
    British North America 1612–1783
    German-speaking countries
    Spain, 1763
    Ticket gallery
    Mathematical analysis
    Probability of winning
    Scams and frauds
    Payment of prizes
    Outcomes for big winners
    See also
    References
    Further reading
    External links

Magic – remember that you are constantly being fooled

1.

a. The art or practice of using charms, spells, or rituals to attempt to produce supernatural effects or control events in nature.

b. The charms, spells, and rituals so used.

2. The exercise of sleight of hand or conjuring, as in making something seem to disappear, for entertainment.

3. A mysterious quality of enchantment

  • Etymology
    White, gray and black
    High and low
    History
    Witchcraft
    Magicians
    See also
    References
    Further reading
    External links

Success – Happy Ending – Performance – Satisfied wish

1. The achievement of something desired, planned, or attempted: attributed their success in business to hard work.

2.

a. The gaining of fame or prosperity: an artist spoiled by success.

b. The extent of such gain.

3. One that is successful: The plan was a success.

  • In American culture
    In biology
    In education
    In business and leadership
    In philosophy of science
    In probability
    See also
    References
    Sources
    Further reading
Loss Function
  • Examples
    Regret
    Quadratic loss function
    0-1 loss function
    Constructing loss and objective functions
    Expected loss
    Statistics
    Frequentist expected loss
    Bayesian expected loss
    Examples in statistics
    Economic choice under uncertainty
    Decision rules
    Selecting a loss function
    See also
    References
    Further reading
Victory
  • The universal sign for victory
    Religion
    See also
    References
    External links
Survival Analysis
  • Introduction to survival analysis
    Definitions of common terms in survival analysis
    Example: Acute myelogenous leukemia survival data
    Kaplan–Meier plot for the aml data
    Life table for the aml data
    Log-rank test: Testing for differences in survival in the aml data
    Cox proportional hazards (PH) regression analysis
    Example: Cox proportional hazards regression analysis for melanoma
    Cox model using a covariate in the melanoma data
    Extensions to Cox models
    Tree-structured survival models
    Example survival tree analysis
    Survival random forests
    Deep Learning survival models
    General formulation
    Survival function
    Lifetime distribution function and event density
    Hazard function and cumulative hazard function
    Quantities derived from the survival distribution
    Censoring
    Fitting parameters to data
    Non-parametric estimation
    Computer software for survival analysis
    Distributions used in survival analysis
    Applications
    See also
    References
    Further reading
    External links
Performance Indicator
  • Categorization of indicators
    Points of measurement
    Identifying indicators
    Examples
    Accounts
    Marketing and sales
    Manufacturing
    Professional services
    System operations
    Project execution
    Supply chain management
    Government
    Further performance indicators
    Problems
    See also
    References
    Further reading
Infinity & Beyond
  • History
    Early Greek
    Zeno: Achilles and the tortoise
    Early Indian
    17th century
    Mathematics
    Symbol
    Calculus
    Real analysis
    Complex analysis
    Nonstandard analysis
    Set theory
    Cardinality of the continuum
    Geometry
    Infinite dimension
    Fractals
    Mathematics without infinity
    Physics
    Cosmology
    Logic
    Computing
    Arts, games, and cognitive sciences
    See also
    References
    Bibliography
    Sources
    External links

A Solar System Gravity Map

Leave a comment

I was impressed when ‘SpoonUnit’ (of thingiverse) presented the planets along a dimension other than the distance from the sun (using diameter instead).  I wondered if other dimensions might show the planets from interesting perspectives.  The dimension of mass and its resulting influence upon us, by way of gravity, seems promising.  Gravity has more influence upon us than many other metrics.

The source of the gravity information is:

https://en.wikipedia.org/wiki/List_of_gravitationally_rounded_objects_of_the_Solar_System

https://en.wikipedia.org/wiki/Gravity

I approximated and represented it all in a unique 3-dimensional map, using TinkerCad …

https://www.tinkercad.com

The numbers represent relative gravity, referenced to Earth at gravity 1.
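Those relative numbers can be sanity-checked from Newton’s formula g = GM/R², ratioed against Earth; the masses and radii below are approximate published values:

```python
# Surface gravity g = G*M / R^2, expressed relative to Earth.
G = 6.674e-11                       # gravitational constant, m^3 kg^-1 s^-2

bodies = {                          # (mass in kg, mean radius in m), approximate
    "Earth": (5.972e24, 6.371e6),
    "Mars":  (6.417e23, 3.390e6),
    "Moon":  (7.342e22, 1.737e6),
}

def g(mass, radius):
    return G * mass / radius**2

g_earth = g(*bodies["Earth"])
relative = {name: g(m, r) / g_earth for name, (m, r) in bodies.items()}
for name, rel in relative.items():
    print(f"{name}: {rel:.2f} g")   # Mars comes out near 0.4, the Moon near 0.2
```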

I 3D printed it … and made it ready for Thingiverse. You are invited to download the object for 3D printing: go to Thingiverse and search for GeeEaZy.

https://www.thingiverse.com

It can be viewed as ‘looking down a well’ at a clockwise spiral of equally sized objects.

Looking for the home feeling? Go to Neptune (1.1), Saturn (1.1), Venus (0.9) or Uranus (0.9).

For the gravity that fits your muscles.

Looking for a familiar and easy life? Go to Mars (0.4), Mercury (0.4), Io (0.2) or Luna (0.2).

For existence that we have some experience with, and that requires little effort.

See Jupiter as the major mover.

https://en.wikipedia.org/wiki/Gravity_assist

The other solar system objects approximate the zero-g environments that we have experience with.

https://en.wikipedia.org/wiki/Weightlessness

Ponder the 5 planets of near equal mass – this may be a clue to something.

Remember how important, for chemistry, the electron grouping was …

Is 1 5 2 2 3 like the electron energy levels of 2 6 10 14 18 (s p d f g)?

https://en.wikipedia.org/wiki/Electron_shell

Aside: note that s = 2, p-s = 4, p+s = 8, p+d = 16 (also f+s), f+d+p+s = 32, …

It looks like a binary influence may be at work here … ?

The grouping could also be 1 2 3 2 2 3 … or …

Happy pondering!

Numbers for Geometry

Leave a comment

My solution to a nagging problem in Mathematics

I became intrigued by the variety of ways to establish a coordinate system for representing solid geometry in numeric terms.

https://en.wikipedia.org/wiki/Cartesian_coordinate_system

I remember, as a young person, being amazed at the difficulty of transitioning from plane to solid geometry.  Being comfortable with x-y graphs, I felt that it would be easy to add a z axis and be done with it.  I soon found that confusion prevailed.  Teachers at the blackboard oriented the axes so that y was up and z came out into the room.  My intuitive choice was to have z going up out of the paper at my desk.  I found agreement with my system when describing ‘ground vehicles’, where x is to the front and z is up.  But when working with aircraft I found that x was the same as for ground vehicles but that z pointed down.  And for water vehicles, where z is meaningless (except for submarines), the aircraft convention was adopted.  Space is a different story altogether, where up is not defined.

https://en.wikipedia.org/wiki/Aircraft_principal_axes

https://en.wikipedia.org/wiki/Axes_conventions

Mathematics does not settle the confusion, when it accepts any orientation as equivalent, as long as the ‘right hand rule’ convention is followed.

https://en.wikipedia.org/wiki/Right-hand_rule

Most people couldn’t care less.  But with 3D printers and design tools now readily available, more people face the confusion.  Which way is ‘up’?  One of my favorite books from childhood was ‘Flatland’ by Edwin Abbott, in which a character struggled with the suggestion of ‘up without north’ and concluded that the concept was silly.

https://en.wikipedia.org/wiki/Flatland

I have concluded that this confusion is silly in a system so important for describing the world we live in.  The solution I chose was to relate the axes to the only meaningful perspective – the individual’s senses.  The x axis points in the direction of visual interest and mobility preference: x points ahead, to the front.  Z is another intuitive sense and must point up.  The right hand rule and the mathematical definition of orthogonal axes then require that y point to the left.  We are, after all, a ground vehicle.
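This choice (x forward, y left, z up) really is right-handed, which can be verified with a cross product: in a right-handed frame, x × y must equal z. A small sketch:

```python
def cross(a, b):
    """Right-hand-rule cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

forward = (1, 0, 0)   # x: direction of visual interest
left    = (0, 1, 0)   # y: to the left
up      = (0, 0, 1)   # z: up

# x cross y yields z, confirming the frame is right-handed.
assert cross(forward, left) == up
print("x cross y =", cross(forward, left))
```

Had y pointed right instead, x × y would come out as (0, 0, −1) and the frame would be left-handed.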

So to aid in remembering this system, and its associated conventions for roll, pitch and yaw, I have created a structure using my hero Albert Einstein as the model.

https://en.wikipedia.org/wiki/Albert_Einstein

The x-axis comes out of his head in the region of the eyes, providing for the ROLLing of eyes.

The z-axis comes out of the top of his head, providing for the YAWing of disagreement.  And the y-axis comes out of the left side of his head, providing for the PITCHing of agreement.

I decided to further enhance my Albert Einstein model by allowing us to view into his thoughts, by not completing the construction of the top of his head.  Looking into his head we get a glimpse of his thought processes.  We find 4 major areas of concentrated innovations.

Area 1 relates to the ‘Photoelectric Effect’ and is represented by a matrix of similar objects.

I’ll leave it to you to relate quantum theory to the observation that light dislodges an electron only if the frequency is appropriate.  He showed that the intensity and duration of the light had no effect upon releasing electrons; only the frequency mattered.  From this he concluded that light comes in discrete levels and units, and that light can influence matter.

https://en.wikipedia.org/wiki/Photoelectric_effect

Area 2 relates to ‘Brownian Motion’ and is represented by a rotated normal distribution curve, compared to the Apollo spaceship.  I’ll leave it to you to relate statistics to the observed motion of a pollen grain being moved randomly about by impacts with water molecules.

https://en.wikipedia.org/wiki/Brownian_motion

Area 3 relates to the relationship between Time and Space and is represented by the light cones of past and future interacting with the present space plane.  I’ll leave it to you to see how the ‘special theory of relativity’ can unify time and space.

https://en.wikipedia.org/wiki/Special_relativity

Area 4 relates to the relationship between Mass and Energy and is represented by the famous equation that almost everyone knows.  I’ll leave it to you to ponder the enormous amount of energy released by the conversion of even a tiny amount of mass.  Take one of the largest numbers you can think of, the speed of light (very fast), multiply it by itself, then multiply that by the mass converted, and you get a truly humongous number for the energy released.  So much so that, over many decades of attempts, we still struggle to contain and control it.

https://en.wikipedia.org/wiki/Mass–energy_equivalence
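To make the humongous number concrete, here is the arithmetic for a single gram of mass (using the conventional 4.184e9 joules per ton of TNT):

```python
c = 299_792_458          # speed of light, m/s
mass = 0.001             # one gram, in kg

energy_joules = mass * c ** 2          # E = m * c^2
tnt_tons = energy_joules / 4.184e9     # 1 ton of TNT ~ 4.184e9 J

# One gram of converted mass yields roughly a Hiroshima-scale release.
print(f"{energy_joules:.3e} J  (~{tnt_tons:,.0f} tons of TNT)")
```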

My completed model includes an idea of the frustration of living in the shadow of greatness.  We see a smaller pale imitation of the colorful Einstein model standing off to the side, admiring with awe the accomplishments of a hero. 

Yet we know that the effort to assign numbers to physical objects will continue to broaden our understanding and appreciation of nature.

Click on the image to the right for a video of my model.

Compliant Material

Leave a comment

Justification for buying a 3D printer …

or How much I enjoy Making Compliant Materials with my Snapmaker 3D printer+

I am excited about this field of Mechanical Engineering and the Computer Aided Design and Manufacturing currently available to the public.  Working with compliant material is much easier than manufacturing using non-compliant metal.  The 3D printer technology is a perfect companion for this material. 

This post is a collection of links and descriptions of compliant mechanisms, many of which I printed on my Snapmaker 3D printer.

The following information MUST be somewhat understood before the wonder and beauty of this material can be fully appreciated.  The lecture series is optional but is very informative and well presented.  Be careful, the advertisements can be enticing.  This is just a sample of the instructional material available to you.

Visit to BYU lab … Derek Muller and Larry Howell Video

Definition:

https://en.wikipedia.org/wiki/Compliant_mechanism

Derek Muller Video

https://www.veritasium.com/videos/2021/2/18/why-robots-that-bend-are-better

Derek Muller Video

https://www.veritasium.com/videos/2019/10/4/engineering-with-origami

Larry Howell Video

https://www.compliantmechanisms.byu.edu

https://www.compliantmechanisms.byu.edu/maker-resources

Robert J Lang Video

https://langorigami.com

Lecture Series … Jonathan Hopkins

FACTS of Mechanical Engineering Lecture Series … Jonathan Hopkins

https://www.youtube.com/channel/UC5Jz6SBlu2Sv61kfssv4DOw

Cool Prints

Just Noticeable Difference

Leave a comment

Psychophysics has discovered a law called the Weber-Fechner Law which can demonstrate the different perceptions of the rich and the poor.  It proves what we already knew: that a little stimulus change means a great deal when you have almost nothing, and very little when you have an excess.  A single dollar does not create the same emotional appreciation to a wealthy person as to a poor person.

Imagine you are in a closed room with a single lightbulb and a controller that can make small changes in the light intensity.  If the light is initially very dim and you increase or decrease the intensity just enough for you to notice a change from the original brightness, that change in intensity is called the Just Noticeable Difference (JND).  If you then make the initial light very bright and do the same slight adjustments until you notice the change, you then have another JND.  The value of the two JNDs is not the same.  The JND when starting from a large intensity is greater than if you start from a small intensity. Larger changes are required to make a meaningful difference in bright light.  If we have a small sensory base level (dim light), it takes less of an increase or decrease for us to notice a change.  This is true for a large number of our sensory modalities; seeing, hearing, touching, temperature, etc.  This is one of the few laws of psychology.  It was discovered by Ernst Weber and developed into a law by Gustav Fechner in 1860.  Fechner is known as the father of Experimental Psychology.  He thought that he had solved the infamous mind-body problem, expressing it in a mathematical formula (S=K ln I; where S is sensation {mind}, I is stimulus Intensity {body}, ln represents the natural logarithm, and K is a constant).  This formula is inscribed on his tombstone.
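Fechner’s formula makes the asymmetry easy to compute: because S = K ln I, the same absolute change in intensity produces a much larger change in sensation against a dim baseline. A sketch with an arbitrary K:

```python
import math

def sensation(intensity, k=1.0):
    """Fechner's law: S = K * ln(I)."""
    return k * math.log(intensity)

# The same absolute change (+1 unit) against different baselines:
dim_step    = sensation(11)  - sensation(10)    # base of 10
bright_step = sensation(101) - sensation(100)   # base of 100

# The dim-baseline change in sensation is nearly ten times larger.
print(f"{dim_step:.4f} vs {bright_step:.4f}")
```

The ratio of the two steps is roughly the ratio of the baselines, which is exactly the “a dollar means more to the poor” argument in logarithmic form.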

With money and with government influence of the economy, how do we increase population well-being, while maximizing the utility of resources? How do we keep the profit motive alive while providing aid to the needy?  This law may provide a partial answer to these questions.

This law also applies to wealth.  A poor person will notice a small increase or decrease in wealth, while it takes a much larger change to make a similar difference to a wealthy person.  Increases in wealth are experienced as positive emotions while losses are depressing.  If a government has a limited resource and wishes to improve the well-being of a population, this law would instruct it to give to poor people, thereby creating more positive emotions per unit of resource.  The resource can be money, food, shelter, security or anything which will be perceived with emotion.  Trickle-down manipulations are second-order, inefficient methods of increasing a population’s positive emotions.  Economic stimulation to improve the population’s well-being is best achieved by aiding the poor.  If you wish to efficiently use a limited resource, and avoid spoilage or misuse, you should provide it to those who will appreciate it the most.  Thus the cost of a single apple, for example, should be less than the per-unit price of a large-quantity purchase.  A profit may be realized if the large purchase (wholesale) at a higher per-unit price is followed by a ‘value added’ process.  In the example of apples, the ‘value added’ process could be baking an apple pie, or preserving the apples for storage.  This is different from the economic model of producing a profit by buying in large quantity and selling in small units.  In the JND model the large-quantity purchase of a commodity must be accompanied by a ‘value added’ process to produce a profit.

A mature Psychology science is expected to use scientific discoveries to aid people in psychological distress.  An ethical educational system is expected to encourage and perpetuate ethical behaviors.  A viable spiritual leadership is expected to guide people into virtuous directions.

Many discoveries in Psychology have been applied in ways that effectively decrease population well-being.  Discoveries in addiction behavior have been used to increase irrational desires for products and services.  Discoveries in attractiveness perception have been used in advertising to increase sales and encourage discontent and reduced self-appreciation.  Discoveries in greed have been used to encourage insatiable consumption, producing more greed.  Discoveries in aggressive behavior and hostility have been used to support competition and bully wars.  JND discoveries are being used to slightly decrease the size of a product’s container, thereby increasing the profit.  We are educated to accept these manipulations as ‘smart business’ and to see them as a commendable application of science and understanding.  Our economic and ethical principles reward this inhumane behavior.  Perhaps an educational change is warranted as well as a maturing of the science of Psychology.  More needs to be said of the responsibilities of our spiritual leadership.

We must anticipate that a growing wealth gap and the movement of large groups of people from middle class to lower class will produce an unhealthy response from the disenfranchised.  The assumption of equal emotional appreciation of an equal quantity of money is not fair to the poor.  A given quantity of money means much more to a poor person.  Consider what it would be like if the poor were to behave as if equal emotional exchanges were the norm.  A poor person would be willing to risk almost everything to gain the necessities of life.  This does not produce a healthy society.

A government must keep in mind that in gambling (which may be hidden, but present in most human activities) it is possible for a wealthy person to ‘buy the pot’ with a bet well beyond the JND of a poor person, while meaning much less to them.  Honest betting may be closer to ‘percentage of wealth’ wagers.  This may be the motivation behind the wealthy doing all they can to discourage the ‘bet it all’ mentality, where the poor attempt to create a level playing field through insurrection.  

There is a new branch of the literature on public finance hypothesizing that the Weber–Fechner law can explain the increasing levels of public expenditures in mature democracies. Election after election, voters demand more public goods to be effectively impressed; therefore, politicians try to increase the magnitude of this “signal” of competence (the size and composition of public expenditures) in order to collect more votes.

And the lightbulb comes on!

Boolean Algebra & Multiple Negatives

Leave a comment

Tools of the Trade: the engineering design items contained in the following image are …

The “TTL Cookbook”, published in 1974, became the standard reference to TTL (transistor-transistor-logic) devices.

A “Drawing Template”, of standard logic symbols, used for schematic drawing of logic designs.

“Protection Tubes”, containing the standard packaging (chips) of integrated circuits.  Depending upon how you interconnect these logic blocks, you can build a computer, a calculator, an electronic music system, a digital voltmeter, a counter, a television terminal readout display, a color-tv dot-bar generator, educational demonstrators, or any of thousands of other possibilities.  While a single “yes-no” decision by itself usually is not too useful, the proper combination of grouped “yes-no” decisions taken together can represent a number, a word, a command, a musical note, a test signal, or practically anything else you might like.

LogicGates

Introduction to Boolean Logic:

The Khan Academy has several lessons providing an excellent introduction:

Logic Gates

Boolean Logic

Wikipedia provides very useful reference:

Boolean Algebra

And many Educational Institutions sources:

Princeton

WordPress

Medium

Computer Aided Design applications which are “Open Source” (Free and Modifiable):

TinkerCAD

Blender

The following images are from the “TTL Cookbook” of selected TTL circuits.  Each covers essential information you might need to connect or intelligently use the integrated circuit.  The selected operations are those commonly found in Computer Aided Design (CAD) tools for creating solid objects. See: Constructive Solid Design for details.

The AND logical operation, producing what is referred to as an Intersection or a Conjunction and commonly represented by the ampersand symbol (&):

And

The OR logical operation, producing what is referred to as a Union or a Disjunction and commonly represented by the symbol (v):  

Or

The NOT logical operation, producing what is referred to as a Difference or a Negation or an Inverter and commonly represented by the symbol (- or \):

Inverter

The XOR logical operation, producing what is referred to as an Exclusive OR:

ExclusiveOr

The NAND logical operation, producing what is referred to as a Not AND:

NAnd

The NOR logical operation, producing what is referred to as a Not OR:

NOr
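The behavior of these gates can be tabulated in a few lines; a sketch generating the truth tables for the two-input gates above:

```python
# Truth tables for the two-input gates described above.
gates = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
}

print("a b | " + " ".join(f"{name:>4}" for name in gates))
for a in (0, 1):
    for b in (0, 1):
        row = " ".join(f"{fn(a, b):>4}" for fn in gates.values())
        print(f"{a} {b} | {row}")

# NOT is the one-input gate: NOT a == 1 - a.
```

Note that NAND and NOR are just AND and OR followed by an inverter, which is why each of them alone can build all the others.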

Boolean Logic use in Everyday activities:

How do you get the truth from a chronic liar?  Ask them, “What would you say if I asked you …?”

In some languages, double negatives cancel one another and produce an affirmative.  Languages without negative concord typically have negative polarity items, such as “ever”, “anything” and “anyone” in the sentence “I haven’t ever owed anything to anyone” (cf. “I haven’t never owed nothing to no one” in negative-concord dialects of English, and “Nunca devi nada a ninguém” in Portuguese, lit. “Never have I owed nothing to no one”, or “Non ho mai dovuto nulla a nessuno” in Italian).  Note that negative polarity can be triggered not only by direct negatives such as “not” or “never”, but also by words such as “doubt” or “hardly” (“I doubt he has ever owed anything to anyone” or “He has hardly ever owed anything to anyone”).  In English, double negatives can sometimes be used for affirmation (e.g. “I’m not feeling not good”), an understatement of the positive (“I’m feeling good”).  Thus “There isn’t no other way” resolves to “There’s some other way”.

“I don’t disagree” could mean “I certainly agree”, “I agree”, “I sort of agree”, “I don’t understand your point of view (POV)”, “I have no opinion”.

Whereas some double negatives may resolve to a positive, in some dialects others resolve to intensify the negative clause within a sentence. For example:

  • I didn’t go nowhere today.
  • I’m not hungry no more.
  • You don’t know nothing.
  • There was never no more laziness at work than before.

In contrast, some double negatives become positives:

  • I didn’t not go to the park today.
  • We can’t not go to sleep!
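In strict Boolean terms, this “two negatives make a positive” reading is just the involution law, not(not x) = x; a sketch:

```python
# In Boolean algebra, negation is an involution: not (not x) == x.
for x in (True, False):
    assert (not (not x)) == x

# "I didn't NOT go" therefore resolves to "I went":
went = True
didnt_not_go = not (not went)
print(didnt_not_go)  # True
```

Negative-concord dialects break this law deliberately: their extra negatives agree with the first one instead of cancelling it.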

I don’t want to know no more

I’m not doing nothing; I’m thinking.

I did not go nowhere resolves to I went somewhere.

“If you don’t wanna go nowhere…”

“Badges? We ain’t got no badges. We don’t need no badges!.”

“I won’t not use no double negatives”

“I ain’t never heard of no licence.”

“Inside toilet? I ain’t never not heard of one of them nor I ain’t nor nothing.”

“We don’t need no education / We don’t need no thought control”

“(I Can’t Get No) Satisfaction”

In other languages, doubled negatives intensify the negation – this is called negative concord or emphatic negation.

Portuguese, Persian, French, Russian, Spanish, Neapolitan, Italian, Bulgarian, Czech, Polish, Afrikaans, Hebrew, Ukrainian, and some dialects of English, such as African-American Vernacular English, are examples of negative-concord languages

Negative-concord languages are more common than those without.

And what about two positives … can they make a negative?

An English professor was giving a lecture on ‘Double Negatives’ in his class and said, “It is interesting that two negatives make a positive, but there are no two positive words that can make a negative in the English language.”

“Yeah, right,” a student exclaimed from the back of the class.

Negatives

Puzzles:

Maker Tools

1 Comment

I recently attended a Rapid Prototyping class at our local Vocademy.  Merle, my wife, gave me the class for Hanukkah/Christmas.  The class introduced me to 3D printing, Laser Engraving, Laser Cutting, CNC machinery and much more.  I learned that “If you can’t open it, you don’t own it!”, among much more.  It was a wonderful experience which encouraged me to buy a 3 in 1 prototyping tool.  I bought the version 1 of SNAPMaker.

SnapMaker

I have enjoyed exploring the 3D printer to make elevation maps of mountainous areas, using the free web sites TouchTerrain and Terrain2STL.  Here are 3 that I did of my brother’s home area in Mendocino County, California.  I used 3 different scalings of elevation to test psychological realism.  I liked the times 2 and times 3 exaggerations best (actually I settled on 1.618, the Golden Ratio):

BellSprings

Other explorations include an artistic hummingbird (Broken Wing & Crooked Snout).  This was my first experience with 3D Printer Crutches:

HummingBird

And several Etchings, Engravings and a wonderful exotic gear arrangement:

Products

Here are the Cubic Gears in action:

 

Here is a wonderful Spiral Gear arrangement from Thingiverse (a source of Open Designs)

SpiralGears

And I’m using the Laser Cutting capability to make Boxes (thanks to ‘MakerCase’, another open source of designs):

GLBox&Map

RiversideBox&Map

And even more …

Globe

Escher

Etchings

LakeOutline

M.C. Escher Tiles

MakerEscher

Who knows what lies in the future?  I’ve barely touched the CNC Carving (Router) capabilities of my SnapMaker.

Here are some lessons I learned at the Prototyping class, and where to find information.

Lessons Learned:

Be aware that there exists a thriving community of help and free stuff out there.  I think that you should initially use only free software and designs, then later select your professional package.

Consider the NATIVE language and computer-platform compatibility of a software package and the system/computer (Apple or PC or …) you wish to use.  By NATIVE I mean the platform upon which the software was initially developed.  Ported software is rarely complete and error-free.  By PORTED I mean written for one platform or language, then translated to another.

Consider the hidden gotchas of the commercial software world.  AutoDesk FUSION 360 will give generous TRIAL use to students and hobbyists, BUT they encourage storage of your designs on THEIR cloud.  You may have to buy a year’s subscription (~$400) to have access to your designs, UNLESS you were able to save to local media.

Consider the file types that a software package will accept as input and output.  Consider your total package of software/hardware tools and the formats accepted throughout.

Be aware that there are 2 graphic representation types in current use – Vector and Raster.  The difference can be significant when representing an image.  JPG, BMP, GIF, PNG are some Raster formats, while SVG and DXF are Vector formats.
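To make the difference concrete, here is a complete minimal SVG file (a made-up example, not from any tool above).  The circle is stored as geometry rather than pixels, so it redraws cleanly at any size, while a raster file stores a fixed grid of pixels:

```html
<!-- circle.svg: a vector image is a description of shapes, not pixels -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <!-- one circle, defined by center, radius, and stroke; rescaling
       just redraws the same geometry at the new size -->
  <circle cx="50" cy="50" r="40" fill="none" stroke="black" stroke-width="2"/>
</svg>
```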

Be aware that while g-code is the standard format for CNC and 3D-printer applications, not all machines will accept standard g-code – remember the old IBM language-enhancement hype (PL/1 was their universal language).  A little aside: I worked for an outfit that bought into IBM’s pitch to use only PL/1, later to find that all their custom software applications would only run on IBM machines, and that it was impossible to get a competitive bid when wanting to upgrade to newer hardware (married to IBM and PL/1 until someone (me) translated their software into Cobol and Fortran).
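For a feel of what g-code looks like, here are a few generic lines (a hypothetical fragment of my own; comment syntax and the set of supported commands vary by controller, which is exactly the portability trap above):

```
G21              ; set units to millimeters
G90              ; absolute positioning
G0 X0 Y0         ; rapid move to the origin
G1 X10 Y10 F300  ; linear cut/print move at a feed rate of 300 mm/min
```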

Be aware that access to home CNC devices is very new and is providing great opportunity for innovative uses and applications.  Keep an eye out for imaginative and exciting news and events – like MAKER Faires, MAKER books and publications.

Computer-Aided Design Tools:   Information is: HERE and HERE

AutoDesk FUSION 360 – Very Detailed and Complete – More than Needed

TinkerCAD – Very Easy to Use and surprisingly Complete

InkScape – Free and quite Adequate

OpenSCAD – Free and Interesting – a Different (programming) approach

Blender – Free and Very Detailed and Complete – More than Needed

Mathematica – Free and Interesting

SketchUp – Free and Interesting

CADMapper – Free and Interesting

Photo Editors:   Information is: HERE and HERE

InkScape – Free and quite Adequate

Retina Engrave – Interesting

Slicers:   Information is: HERE

IdeaMaker – Interesting, with Repair Capabilities

Formats:   Information is: HERE

STL – Stereolithography – 3D Standard

SVG – Vector Graphics Standard

JPEG – Photo Standard – Raster Graphics

G-Code – CNC Standard (somewhat)

Sources:   Information is: HERE

Terrain2STL – Good Source of Map Data

TouchTerrain – Better Source of Map Data

Thingiverse – Great Designs

MakerCase – Boxes made Easy

Filaments:   Information is: HERE

PLA – Biodegradable

CNC Devices:   Information is: HERE and HERE

Snapmaker – Tutorials HERE

The approach I took to using my SnapMaker is as follows:

I am working on an Apple computer, the original SnapMaker, using only free software packages, and doing very basic operations.  What follows is a record of my discoveries.  These tools and capabilities may no longer exist, and more efficient ones are being offered regularly.  Please use the following as a guide only.

FORMATS

CNC formats

I started by identifying the FORMATS of the files required to communicate the information. I only considered 4 file formats (STL (stereolithography), SVG (Scalable Vector Graphics), JPEG (Joint Photographic Experts Group), and G-Code (computer numerical control programming language)).  The rest can wait for later.

3D printing requires an import of an STL format file.  STL is the standard format for communicating 3-dimensional information.  The engraving and carving require an import from a JPEG or SVG format file.  JPEG is a common 2-dimensional photo (raster) format.  SVG is a common 2-dimensional image (vector) format.  The remaining format of interest is G-CODE, which is the standard CNC (computer numerical control) format.  These are the only formats that I concerned myself with initially; there are many, many more!
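As an illustration, a complete ASCII STL file describing a single triangle looks like this (my own toy example; real models are just many thousands of these facets, and most tools exchange the more compact binary STL variant):

```
solid triangle
  facet normal 0 0 1
    outer loop
      vertex 0 0 0
      vertex 1 0 0
      vertex 0 1 0
    endloop
  endfacet
endsolid triangle
```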

SOURCES …

Next, I concerned myself with where I would find sources of designs in the appropriate formats.  I will eventually create my own designs using CAD (computer-aided design) software, but I needed to see results now.

For 3D printing, a perfect starting source is THINGIVERSE.  Makers donate finished designs to this free website.  The designs are presented in STL format, so all I needed to do was download the files and then import them into SnapMaker.  I had a desire to make 3-dimensional maps of mountainous terrain.  I found TERRAIN2STL and TOUCHTERRAIN, two free websites that provide STL format files which import directly into the SnapMaker. 

I also wanted to make boxes using the SnapMaker laser cutting capabilities.  I found MAKERCASE, another free website, which produces SVG format files which import directly into the SnapMaker.  However, a wrinkle developed when the box design assumed a laser-cutting bed larger than the 125 mm by 125 mm bed of the SnapMaker.  More about that later.

For laser engraving I found that PHOTOS (an Apple utility) produces JPEG files from my camera, my phone, the computer screen capture (CMD+SHIFT+3), Google images, and any other photo site.  Usually the editing required was accomplished in PHOTOS.  More about manipulations later.

SOFTWARE  …

I next searched for free software packages that would enable me to design products and export file formats compatible with SnapMaker.  The following diagram identifies the software packages which I found and used: 

MakerFormats

Rarely will you find a single package to make all the manipulations you desire.  The following flow chart identifies the steps and tools used to accomplish my 3D terrain mapping design:  

MakerFlow

I obtained the basic 3D map from TOUCHTERRAIN.  I next wanted to remove the water from the lake so that I could fill in depth information.  This activity required a lake outline, which I obtained using a Google Map image captured in JPEG format.  I imported the Google map into INKSCAPE, where, using the Bezier curves tool, I obtained an outline of the lake.  The outline (in SVG format) and the terrain map (in STL format) were imported into TINKERCAD or BLENDER, where a Boolean difference was performed to accomplish the removal of the lake.  The resulting STL file was imported into SnapMaker for printing.  The outline (in SVG format) was converted to STL using TINKERCAD or BLENDER for editing to add water depth information.

The problem of restricted bed size mentioned above, was solved by ungrouping the components of the box (top, side, bottom) into separate SVG files using INKSCAPE.  

YOU

Your imagination and creativity will guide you to the appropriate, efficient and safe tool as experience grows.

 

Magic Cards, Explained

Leave a comment

I found the Binary Notation connection to this card trick very interesting.  I was struck by the educational possibilities for people who have had an introduction to Number Theory but need a practical application to make sense of it.  Young people at the age of ‘formal thought’ are attracted to magic and to the challenge of understanding ‘how it is done’.  I put together this brief explanation of the math involved in creating this card trick.

To see the trick and my explanation click here:

I wanted to use my new laser cutter, so I made boxes to serve the same function as the cards above:

Each box contains the appropriate number of beads (0, 1, 2, 4, 8, 16, or 32). The subject chooses a secret number, then empties the box(es) showing that number on their faces, counts the beads, and amazingly they add up to the secret number!
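The reason the beads add up rests on binary notation: every number from 1 to 63 is a unique sum of powers of two, and the box whose face value is 2^k holds exactly the numbers whose binary expansion includes that power.  A minimal sketch in JavaScript (the function name is mine, just for illustration):

```javascript
// Which boxes "contain" a secret number? Each box's face value is a
// power of two; a number belongs on the 2^k box exactly when bit k
// of its binary expansion is set, so the selected faces sum back to it.
function boxesFor(secret) {
  const faces = [1, 2, 4, 8, 16, 32];
  return faces.filter(face => (secret & face) !== 0);
}

// Example: 21 = 16 + 4 + 1, so 21 appears on the 1, 4, and 16 boxes.
console.log(boxesFor(21)); // → [ 1, 4, 16 ]
```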

Here is a little aside video of an analog computer doing arithmetic in Binary …

So you want more…..

THE MAGIC CUBE TRICK:

To amaze your friends:

State that you will discover the month of their birth, as well as the day.

Show the Magic Cube, and ask if the number of their birth month (1 – 12) appears on the first of the 4 side panels.

If they report “YES” (selected), add the number in the upper left corner of that panel to a running total.

Continue for the other 3 side panels.

The sum of the numbers in the upper left corners of the “YES” panels is the number of their birth month.

Next; 

Reset the total count to Zero.

Repeat the above process for the day value (1 – 31).  This time using the 4 side panels AND the bottom panel.

The sum of numbers in the upper left corner of the “YES” (selected) panels, is the number of their birth day.

How does it work?  

Click on the “Magic Cards” link above, which describes the same trick, in a playing card format.

To make your own “Magic Cube”:

Go to Thingiverse.com, search for GeeEaZy, select “Magic Cube”, and download the “STL” file, permitting you to 3D print your own Magic Cube.  You may want to scale it for readability.

I think that this small piece of code has vast applications, requiring only minor modifications to make it suitable for many uses:

Think about a McDonald’s counter, where virtual tiles show pictures of French fries, soda, …; pressing a tile initiates price-calculation applications, plus kitchen-order applications, plus inventory-control applications, plus … all made possible because little training is required to obtain accurate ordering information.

Even another variation … :

The following is a “work in progress” … my effort is complete and correct, but WordPress would need to relax its JavaScript content filtering / sanitization system (KSES) to enable my code.  Oh well!  I am very disappointed that this version didn’t work – it was to be my grand finale.

You can Identify the Tiles containing the NUMBER of your Birth MONTH by CLICKING on the TILE(s):

    These are the valid Month Numbers
  1. January
  2. February
  3. March
  4. April
  5. May
  6. June
  7. July
  8. August
  9. September
  10. October
  11. November
  12. December
… This is your Birth Month: …

Now; you can Identify the Tiles containing your Birth DAY by CLICKING on the TILE(s):

… This is your Birth Day: …

Now; … I have magically (using binary math knowledge) determined your Birth Month: …

and your Birth Day: …

THANK YOU

And now for one LAST variation …

This version benefits from the fact that all the magic is done by the tiles, with help from the participant. No involvement from the performer. The participant selects the tiles containing their secret number – turns them over and adds the numbers on the back sides – and magically they add to the secret number!

Aside: the best card trick I’ve seen in a long time …

and Thanks to:

“profhankd” for the hinged box design, and to 

“Zagatorz” for guiding me to a cube presentation of an old card trick.

and to the wonderful binary counting device: “Turing Tumble”

and to the unnamed inventor of the Magic Card trick available at most magic stores

SO … HERE’S THE DEAL …

I’m tired of working on this.

The problem is that the last incarnation of this trick required actual coding beyond my comfort zone.

I never wanted to code in HTML/CSS/JavaScript on the WordPress platform in the first place!

So … everything is pretty much working … except …

In edit mode, using preview, += works fine … while after publishing it is treated as =, and gives only the add-on value.

Like in:  sum += addOn; OR sum = sum + addOn; OR old = sum;    sum = old + addOn;

And … there are scripting problems, that I would learn to live with.

So … here’s the code – put it into your WordPress creation – solve my problems and clean up my mess.

AND

It turns out that WordPress does not allow the kind of code that I am trying to use.

so … until I find a new way to do things … this is it. Enjoy what I have!

HERE’S the HTML/CSS/JS code:

```html
<!DOCTYPE html>
<!-- HTML code -->
<html lang="en">
<head>
  <meta charset="UTF-8"> <!-- data about data: declares the character encoding -->
  <title>Image Value Selector</title>
</head>
<body>

<p>These are the valid Month Numbers</p>

<!-- Ordered List -->
<ol id="numberList">
  <li>January</li>
  <li>February</li>
  <li>March</li>
  <li>April</li>
  <li>May</li>
  <li>June</li>
  <li>July</li>
  <li>August</li>
  <li>September</li>
  <li>October</li>
  <li>November</li>
  <li>December</li>
</ol>

<div id="image-container">
  <!-- Only the <img> tiles carry the selectable-image class; putting it
       on the <label> as well would double-count clicks in the script. -->
  <label>
    <input type="checkbox" name="selected-image">
    <img src="https://jorgezimmerman.org/wp-content/uploads/2025/07/box8.jpeg"
         height="50" width="50" data-value="8" data-id="4" id="4" alt="8"
         class="selectable-image">
  </label>
  <label>
    <input type="checkbox" name="selected-image">
    <img src="https://jorgezimmerman.org/wp-content/uploads/2025/07/box4.jpeg"
         height="50" width="50" data-value="4" data-id="3" id="3" alt="4"
         class="selectable-image">
  </label>
  <label>
    <input type="checkbox" name="selected-image">
    <img src="https://jorgezimmerman.org/wp-content/uploads/2025/07/box2.jpeg"
         height="50" width="50" data-value="2" data-id="2" id="2" alt="2"
         class="selectable-image">
  </label>
  <label>
    <input type="checkbox" name="selected-image">
    <img src="https://jorgezimmerman.org/wp-content/uploads/2025/07/box1.jpeg"
         height="50" width="50" data-value="1" data-id="1" id="1" alt="1"
         class="selectable-image">
  </label>
</div>

<style>
  /* These first rules survive from an earlier layout and match nothing
     in the current markup. */
  .image-container {
    display: flex;              /* flexible box layout */
    gap: 20px;                  /* space between items */
    margin-bottom: 20px;
  }
  .image-box {
    border: 2px solid #FBEC5D;  /* solid 2px border, light yellow */
    padding: 10px;              /* 10px padding on all sides */
    text-align: center;
  }
  .image-selector {
    display: inline-block;      /* combines inline and block behavior */
    margin: 5px;
    cursor: pointer;            /* pointer cursor on hover */
  }
  .image-selector img {         /* <img> descendants of .image-selector */
    border: 2px solid transparent;
    width: 60px;
    transition: border 0.2s;
  }
  .image-selector input[type="checkbox"]:checked + img {
    border: 4px solid #0073e6;  /* blue */
  }
  /* #image-container {
    display: flex;
    gap: 20px;
  } */
  .selectable-image {
    width: 75px;
    height: auto;
    cursor: pointer;
    border: 3px solid transparent;
    transition: border-color 0.3s ease;
    user-select: none;
  }
  .selectable-image.selected {
    border-color: #0073aa;      /* WordPress blue */
  }
</style>

<!-- JavaScript code -->
<script>
  document.addEventListener('DOMContentLoaded', () => {
    const images = document.querySelectorAll('.selectable-image');

    function updateSum() {
      // WordPress rewrote "+=" after publishing, so the sum is
      // accumulated with an explicit old/new pair instead.
      let sum = 0;
      let old = 0;
      images.forEach(img => {
        if (img.classList.contains('selected')) {
          old = sum;
          sum = old + Number(img.getAttribute('data-value'));
        }
      });

      const display = document.getElementById('displayItem');
      if (sum >= 1 && sum <= 12) {
        // Look the month name up in the ordered list.
        const items = document.querySelectorAll('#numberList li');
        const itemNumber = sum - 1;
        const selectedText = items[itemNumber].textContent;
        display.textContent =
          'The Birth Month ' + sum + ' is: ' + selectedText;
      } else {
        display.textContent = '';  // nothing selected, or an out-of-range sum
      }
    }

    images.forEach(img => {
      img.addEventListener('click', () => {
        img.classList.toggle('selected');
        updateSum();
      });
    });
  });
</script>

This is your Birth Month:

<!-- Display Area -->
<p id="displayItem"></p>

</body>
</html>
```

Next, you will find one of the first of many discussions that I had with ChatGPT.

I enjoyed our collaboration – it started a lot of lines-of-thought and ChatGPT explained its thinking very well. I look forward to working on many more projects with “Chat”.

CODE:

### **Overview**

You want to:

1. Display five pictures on your WordPress site.

2. Let users select any number of them (like checkboxes or clickable images).

3. When the user is done, perform different mathematical operations depending on which pictures were selected.

### **How to Approach This**

#### **1. Displaying Selectable Pictures**

You can use HTML and JavaScript for this. Each image can be clickable, toggling its “selected” state.

#### **2. Detecting Selections**

You’ll need JavaScript to keep track of which images are selected.

#### **3. Performing Calculations**

When the user submits (or clicks a button), your JavaScript code checks which images are selected and runs the appropriate math.

### **Example Code**

You can add this code to your WordPress site using a Custom HTML block (for the HTML) and a plugin like “Insert Headers and Footers” or your theme’s custom JS area (for the JavaScript).

#### **HTML (put in a Custom HTML block):**

```html
<div id="picture-selector">
  <img src="URL1.jpg" class="selectable-pic" data-value="1" alt="Pic 1">
  <img src="URL2.jpg" class="selectable-pic" data-value="2" alt="Pic 2">
  <img src="URL3.jpg" class="selectable-pic" data-value="3" alt="Pic 3">
  <img src="URL4.jpg" class="selectable-pic" data-value="4" alt="Pic 4">
  <img src="URL5.jpg" class="selectable-pic" data-value="5" alt="Pic 5">
</div>

<button id="calculate-btn">Calculate</button>

<div id="result"></div>
```

*Replace `URL1.jpg` etc. with your actual image URLs.*

#### **CSS (optional, for highlighting selection):**

```html
<style>
.selectable-pic {
  border: 2px solid transparent;
  width: 100px;
  margin: 5px;
  cursor: pointer;
  transition: border 0.2s;
}
.selectable-pic.selected {
  border: 2px solid #0073aa;
}
</style>
```

#### **JavaScript (put in your custom JS area):**

```javascript
document.addEventListener('DOMContentLoaded', function() {
  const pics = document.querySelectorAll('.selectable-pic');
  pics.forEach(pic => {
    pic.addEventListener('click', function() {
      pic.classList.toggle('selected');
    });
  });

  document.getElementById('calculate-btn').addEventListener('click', function() {
    const selected = Array.from(document.querySelectorAll('.selectable-pic.selected'));
    const values = selected.map(img => parseInt(img.getAttribute('data-value')));

    // Example: Different operations based on selection
    let result;
    if (values.length === 0) {
      result = "Please select at least one picture.";
    } else if (values.length === 1) {
      result = "Square: " + (values[0] ** 2);
    } else if (values.length === 2) {
      result = "Sum: " + (values[0] + values[1]);
    } else if (values.length === 3) {
      result = "Product: " + (values[0] * values[1] * values[2]);
    } else {
      result = "Total: " + values.reduce((a, b) => a + b, 0);
    }
    document.getElementById('result').textContent = result;
  });
});
```

### **How It Works**

– Users click images to select/deselect them (they get a border).

– When the button is clicked, the script checks which images are selected.

– It performs a different operation depending on how many are selected (you can customize the logic).

Alternative Energy

Leave a comment

Lately I’ve been thinking of other ways to satisfy my power needs.  At our home in the Mountains we are very close to being “off the grid”.  We have our own water supply and our own sewage system, and we could burn wood exclusively for heat, but we are far from being self-sufficient.

Electric power is critical for our lifestyle.  

We have a stream flowing through the property, which could supply some power following rain storms or during Spring run-off, but continuous supply of power from it is not possible.  We have limited Sun access for much of the day.  We have lake breeze many afternoons.  We have tremendous temperature differences during the Winter, between Air temperature and Lake temperature.  Geothermal is being used by neighbors via old wells and heat transfer systems.

The hope is that with sufficient energy storage (perhaps my electric car), using the sources available we might be able to be “off the grid”.

Now … how do I turn that hope into reality … hmm …

For many years people around Mendocino County, California lived remotely with plenty of resources to explore off-the-grid existence.  Much of their effort was recorded and directed by a magazine, Home Power.  I look forward to searching through the archived issues in the local library.

I have obtained a WaterLilly and will explore its use next year.  I might even tow it behind the canoe to charge the cell phone (which probably would have no service bars).

We have installed a new fireplace insert which makes the living room very comfortable.  An additional feature is the Fireplace Fan, which requires no external power: it converts fireplace heat to electrical energy and uses that to turn two fans.  Hopefully more than the living room will be comfortable.

The concept of Gravity Weight attracted me sufficiently to install a series of LED lamps powered by a falling weight.  We get about 20 minutes of light per pull.  You could get a lot more time if the weight were falling from a tree (and the chain were long enough, and …)
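A rough sanity check on those 20 minutes (the 10 kg weight and 2 m drop are my own illustrative assumptions, not the lamp’s actual spec):

```
E = m · g · h ≈ 10 kg × 9.8 m/s² × 2 m ≈ 196 J
P ≈ 196 J ÷ (20 × 60 s) ≈ 0.16 W
```

A fraction of a watt is the right order of magnitude for a small LED, and it shows why a longer drop (the tree idea) buys proportionally more run time.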

The Stirling Engine represents for me the most promising alternative energy source for small applications.  The temperature differences that we have, particularly in Winter, offer possibilities for future exploration.  The video guy in the link above is Bill Nye (The Science Guy) of science-education fame.  If you’re either too old or too young to remember him, it would be worth your while to Wiki him.

A Mendocino Motor is just too beautiful to ignore.  The combination of Solar energy and magnetic levitation is very exciting.  An example of the kind of Engineering you would expect to find in Mendocino County, California, USA!

I have a rotating wheel device that uses Memory Wire for power.  This wheel is a very fragile structure requiring short (in duration and intensity) use.  A robust application is the Muscle Wire (spring) robotic muscle simulation.  This video discusses, in flowery expressions, a possible future where Nitinol plays a primary role in energy production, while this video attempts to explain the atomic structure responsible for the memory ability.

The possible uses are so numerous as to be “low hanging fruit”.

All of these approaches hold promise of ‘scaling up’, but in their present state, there are trade-offs that are not that easy to make.

And … don’t forget the Dyson Sphere:

 

Here are videos of my explorations thus far:

 

Laboratory Test Equipment

Leave a comment

I was attracted to the numerous pieces of laboratory test equipment that I came in contact with while working as an engineer, but couldn’t justify purchasing them.  Recently I came across reasonably priced ‘build your own’ kits, and have enjoyed playing with them:

PowerSupply

The Power Supply above will provide 0 to 12 volts DC to the red & black clips; added features are a continuity tester with buzzer (green), a square wave generator (yellow), and a logic probe (white).  And all for about $14.

SignalGenerator

The signal generator above will produce square, sine, or triangular waves with frequencies from 1 Hz to 1 MHz.  For approximately $13.

Oscilloscope

The oscilloscope above also functions as a frequency counter and will display a single channel input.  For approximately $20.

ComponentTesting

The component tester above will measure resistance, inductance, capacitance, frequency and voltage, and will test transistors.  Additionally it will generate square waves and a pulse-width-modulation signal.  And all for approximately $15.

I’ve had the fun of building them and am now looking for uses.

Play NIM

Leave a comment

For my Christmas gift to young people next year (2020), I plan to put together a Play Nim game package.  The Box will be one that I produce with my new laser cutting skills.  The instructions and explanation will be the following:

PlayNIM

Since High School days I’ve had a fascination with the game of NIM.  I’ve tried on several occasions to write a program to play the ‘perfect game’, but for one reason or another didn’t make it.
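For the record, the ‘perfect game’ is well understood mathematically: XOR all the heap sizes together (the nim-sum) and, if it is non-zero, move so that it becomes zero.  A sketch in JavaScript rather than my Arduino code (the function name is mine, for illustration):

```javascript
// Perfect-play Nim move under normal rules (taking the last object wins).
// The nim-sum is the XOR of all heap sizes; a position is losing for the
// player to move exactly when the nim-sum is zero.
function perfectMove(heaps) {
  const nimSum = heaps.reduce((a, b) => a ^ b, 0);
  if (nimSum === 0) return null; // no winning move exists
  for (let i = 0; i < heaps.length; i++) {
    const target = heaps[i] ^ nimSum; // heap size that zeroes the nim-sum
    if (target < heaps[i]) {
      return { heap: i, take: heaps[i] - target };
    }
  }
}

// Example: heaps [3, 4, 5] have nim-sum 2; taking 2 from the first heap
// leaves [1, 4, 5], whose nim-sum is 0.
console.log(perfectMove([3, 4, 5])); // → { heap: 0, take: 2 }
```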

Recently I was having fun with an Arduino MEGA 2560 microcontroller.  I started with a UNO board but quickly found that it didn’t have enough inputs for my needs.  The MEGA has 54 digital inputs and 16 analog inputs; that should hold me for a while.

Now I’ve done it!  Here is a picture of the hardware side of the solution:

PlayNIM

The software side is harder to show, but it works!  Sorry, but this is all I have to show:

NimCode1

NimCode2

NimCode3

NimCode4

NimCode5

NimCode6

NimCode7

Older Entries