Calls to slow or rethink US surveillance practices and autonomous weapons gained new urgency after public remarks from engineer and policy commentator Caitlin Kalinowski questioned whether key limits had received enough scrutiny. Kalinowski criticized warrantless monitoring of Americans and the prospect of machines making lethal decisions without a human sign-off. Her warning taps into ongoing fights in Washington and within the tech and defense sectors over safety, oversight, and the rule of law.
“Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got,” Caitlin Kalinowski said.
At stake are two high‑impact issues. One concerns how intelligence agencies handle data that can include Americans’ communications. The other centers on how far the military should go in delegating decisions to software and sensors during combat. Both questions have drawn bipartisan attention and sustained pushback from civil liberties groups, veterans, and AI researchers.
Why the Surveillance Fight Is Back
US surveillance law allows the government to collect foreign intelligence without a traditional warrant. In practice, that collection can sweep in messages or data involving Americans. Supporters say the programs help disrupt plots and track threats at speed. Critics argue that later searches of that stored data should require a judge's approval to protect privacy.
Lawmakers have pressed agencies to tighten rules and improve auditing. Inspectors general have flagged compliance lapses in recent years, fueling demands for clearer safeguards and more transparent reporting to Congress. Privacy advocates argue that stronger limits would not block legitimate national security work. They want clear red lines for when searches must involve a court.
Former officials counter that extra hurdles can slow time‑sensitive work. They argue that internal checks, training, and penalties for misuse can address concerns without risking intelligence gaps. The dispute often comes down to how to measure risk: the risk of overreach versus the risk of missing a threat.
Autonomous Weapons and the Human Role
The debate over lethal autonomy hinges on whether a machine should ever select and engage a target without a human decision. The Pentagon maintains that commanders must exercise appropriate human judgment over the use of force. It requires high‑level reviews for new systems and testing to reduce the chance of unintended harm.
Technologists warn that real‑world conditions can break lab assumptions. Sensor errors, spoofing, and data drift can mislead a model in the field. AI ethicists point to accountability gaps if no person is clearly responsible for a strike. Human rights groups urge a binding rule that a human must authorize any use of lethal force.
Defense planners say autonomy can improve protection for troops and speed in complex environments. They cite edge cases, like swarming drones or fast‑moving air defenses, where reaction times matter. They argue that well‑designed systems with strict rules of engagement can reduce civilian harm compared with fatigued or overwhelmed operators.
Industry, Research, and Public Concerns
AI researchers urge “human on the loop” controls, fail‑safes, and clear escalation chains. Some military partners have begun publishing test results and red‑team findings to build trust. Civil society groups push for independent audits and public reporting on incidents, near‑misses, and remedial steps.
Kalinowski’s remarks reflect a broader unease across tech and policy circles. The fear is that practice is outrunning policy. Engineers want clearer standards for acceptable use. Lawyers want stronger oversight and due process. Commanders want tools that work under stress and scrutiny.
- Privacy advocates seek court oversight for searches involving Americans’ data.
- Defense officials emphasize testing, training, and command accountability.
- Researchers call for transparency, incident reporting, and rigorous red‑teaming.
What To Watch Next
Several trends will shape the next phase. Congress is weighing tighter guardrails on surveillance queries that touch Americans' data. Agency leaders are expanding compliance programs and audits. Courts may see more legal challenges testing how far statutory authorities reach.
On autonomy, allied nations are negotiating common principles for use and export. Trials of human‑machine teaming will expand, with attention on fail‑safe design, explainability for operators, and risk assessments before deployment. Industry groups are drafting safety cases and proposing certification paths similar to those used in aviation and medical devices.
The central question is how to align speed with accountability. Kalinowski’s warning draws a clear line: human judgment and judicial oversight are not optional in a free society. Whether policymakers can harden those norms into enforceable rules will define public trust in both intelligence work and military AI.
For now, momentum favors more transparency, stronger audits, and explicit human authorization for lethal use of force. The test will be keeping those promises when real‑world pressure arrives.