David Nowak (@davidnowak.me) reply parent
The core question: Are promotions based on real performance or proximity? The hour remote workers 'aren't working' might be their most impactful. Read more: davidnowak.me/our-most-pro...
I bridge technical expertise with human understanding. Built solutions for millions. I help organizations question assumptions before costly mistakes. Connecting dots, creating impact. davidnowak.me · strategicsignals.business
58 followers 44 following 868 posts
David Nowak (@davidnowak.me) reply parent
Workers accept 25% pay cuts for remote flexibility, yet remote jobs pay slightly more, revealing a labor market inefficiency leaders should seize as an advantage.
David Nowak (@davidnowak.me) reply parent
Trip.com's experiment showed hybrid work didn't hurt productivity or career growth, and it cut resignations by 33%. Measuring true outcomes beats presence-based assumptions.
David Nowak (@davidnowak.me) reply parent
66% of workers say proximity bias impacts their companies. It causes managers to favor those physically close, skewing promotions and layoffs regardless of actual performance.
David Nowak (@davidnowak.me) reply parent
This isn't about efficiency hour-for-hour. It's about talent optimization. Breaking geographic limits aligns people to the right roles, boosting productivity, yet proximity bias persists.
David Nowak (@davidnowak.me) reply parent
Data shows remote workers are 31% less likely to be promoted and 35% more likely to be laid off. Stanford economist Nicholas Bloom calls this discrimination at work.
David Nowak (@davidnowak.me)
Remote workers are clocking an hour less daily than before, yet productivity holds or improves. But most leaders overlook a vital cost: remote workers face career penalties... 🧵 davidnowak.me/our-most-pro...
David Nowak (@davidnowak.me) reply parent
But transformation without trust is just chaos. Anthropic's approach (acknowledge the risks, test thoroughly, deploy carefully) might lose them market share. It might also be the only responsible path forward when the stakes are this high.
David Nowak (@davidnowak.me) reply parent
The human element here matters. These systems promise to democratize automation: no more expensive RPA or custom integrations. Just AI that works with any interface. That's genuinely transformative for smaller organizations without big IT budgets.
David Nowak (@davidnowak.me) reply parent
Here's the deeper question: When 79% of orgs already use browser AI (per PwC), and Gartner predicts 15% of workflows will be AI-managed by 2028... are we automating faster than we're securing?
David Nowak (@davidnowak.me) reply parent
The competitive dynamics are telling. OpenAI went broad with Operator. Microsoft pushed enterprise integration. Anthropic? They're blocking financial sites entirely and limiting to 1,000 trusted users. That's not weakness; that's intellectual honesty.
David Nowak (@davidnowak.me) reply parent
Think about what that means: We've spent decades training employees to spot phishing emails. Now we're deploying AI that falls for them without hesitation. It's like giving car keys to someone who's never heard of traffic lights.
David Nowak (@davidnowak.me) reply parent
The security researchers are genuinely alarmed. SquareX found AI agents are now the "weakest link," more vulnerable than humans because they lack our intuitive suspicion of weird URLs or excessive permissions. They just... trust and execute.
David Nowak (@davidnowak.me)
Anthropic's Claude for Chrome launch reveals a fascinating industry split: Move fast vs. move carefully. While competitors rush AI browser control to market, they're finding 23.6% attack success rates even with safeguards (11.2% in autonomous mode)... 🧵 venturebeat.com/ai/anthropic...
David Nowak (@davidnowak.me) reply parent
What we're witnessing is the collision between advertising-dependent legacy models and AI-driven futures that demand different types of innovation, patience, and cultural DNA. The transformation required may be more fundamental than most realize.
David Nowak (@davidnowak.me) reply parent
The deeper question: Can a company rebuild technological credibility and research culture while maintaining revenue during extended AI investment with uncertain returns? Meta's crisis illuminates systemic challenges across the industry.
David Nowak (@davidnowak.me) reply parent
This isn't just Meta's problemâit's a preview of how established tech giants struggle to adapt business models and cultures to AI's demands. Scale without innovation isn't sufficient. Pure financial incentives fail with mission-driven talent.
David Nowak (@davidnowak.me) reply parent
The "open source champion" narrative is cracking. Meta's Superintelligence Lab is discussing abandoning open-source entirely. Zuckerberg's recent comments about being "careful" signal retreat from their core differentiation strategy.
David Nowak (@davidnowak.me) reply parent
98.8% of Meta's revenue comes from ads, an extreme concentration risk just as AI threatens to disrupt social engagement patterns. They're burning $4.53B quarterly on Reality Labs with no clear path to profitability. The math doesn't work.
David Nowak (@davidnowak.me) reply parent
Here's the quote that captures it: "My best case at Anthropic is we affect the future of humanity. My best case at Meta is we make money." Cultural misalignment runs deeper than compensation can fix.
David Nowak (@davidnowak.me) reply parent
The talent hemorrhage is telling a different story than the headlines. Even $100M+ packages are being rejected. Newly hired AI researchers are leaving within weeks, with some returning to OpenAI. Money can't buy mission alignment.
David Nowak (@davidnowak.me) reply parent
Llama 4's public release performed catastrophically: 16% on coding benchmarks where competitors hit much higher. DeepSeek v3, built with a fraction of Meta's resources, significantly outperforms it. This isn't about "collaboration"; it's an admission of failure.
David Nowak (@davidnowak.me)
Meta's reported "pivot" to using Google/OpenAI models isn't strategy; it's crisis management. The real story reveals deeper system failures that most are missing... 🧵 www.engadget.com/big-tech/met...
David Nowak (@davidnowak.me) reply parent
I remember when people paid for "auto pilot" cars 🤡
David Nowak (@davidnowak.me) reply parent
The question isn't whether AI will take your job. It's whether you're part of designing how AI amplifies your work. The companies getting this right involve humans as design partners from day one. That's where the real collaboration begins: in the design room, not just the workplace.
David Nowak (@davidnowak.me) reply parent
Cross-training, explainable AI, continuous human feedback loops, safety-first protocols: the technical advances aren't just about making better robots. They're about creating systems where human judgment and robot precision create something neither could achieve alone.
David Nowak (@davidnowak.me) reply parent
Here's the design insight: successful automation companies start with human workflows, not robot capabilities. They ask "How do we amplify human expertise?" not "How do we replace humans?" That shift in framing changes everything about the technology they build.
David Nowak (@davidnowak.me) reply parent
Blue Sky Robotics elevated a sign painter from repetitive work to managing production and custom artistry. This isn't job displacement; it's job transformation. Humans move up the value chain while robots handle consistency and precision. Both do what they do best.
David Nowak (@davidnowak.me) reply parent
Boston Dynamics trains Atlas through human demonstrations. Figure AI's robots coordinate using natural language from human supervisors. Tesla tests Optimus with daily human oversight. The pattern? Humans aren't being designed out; they're being designed deeper in.
David Nowak (@davidnowak.me) reply parent
Picnic Technologies runs 1,500 robots and 1,000 humans in one warehouse. Not because they can't afford more robots, but because humans excel where robots fail. Bananas, champagne, eggs. Irregular shapes, fragile items, creative packing. The robot knows its limits.
David Nowak (@davidnowak.me)
Robot companies are revealing something crucial: the future isn't humans vs. robots; it's humans WITH robots. After diving deep into how industry leaders actually develop automation, the patterns are striking... 🧵 thenextweb.com/news/bananas...
The Strategic Codex (@thestrategiccodex.com) reposted
Small businesses (<50 employees) lost 366,400 jobs, a 3% decline worse than Trump's first term despite COVID. Yet tariff debates dominate headlines while America's economic backbone quietly crumbles.
David Nowak (@davidnowak.me) reply parent
Yeah I did. I was looking for something more substantial than "the future is unpredictable." 🤷‍♂️
David Nowak (@davidnowak.me) reply parent
When AI generates harmful content, don't blame the chatbot. Examine the corporate infrastructure that built it and the user who prompted it. We've created intelligence without agency. The responsibility, and the opportunity, remains entirely human.
David Nowak (@davidnowak.me) reply parent
The path forward isn't abandoning conversational interfaces; they're too useful. It's recognizing LLMs as "intellectual engines without drivers." Tools that enhance our ideas, not oracles with independent agendas. The question isn't what the AI "thinks"; it's how we direct its processing power.
David Nowak (@davidnowak.me) reply parent
The human cost is real. Vulnerable people develop "AI psychosis" after confiding in systems they perceive as understanding entities. Healthcare advice gets shaped by training data patterns, not therapeutic wisdom. We're outsourcing judgment to sophisticated prediction machines.
David Nowak (@davidnowak.me) reply parent
Here's what's really happening: every response emerges from six constructed layers (pre-training data, human feedback, hidden system prompts, injected "memories," retrieved context, and randomness parameters). What feels like personality is actually statistical patterns shaped by corporate choices.
David Nowak (@davidnowak.me) reply parent
When ChatGPT says "I promise to help you," the "I" making that promise ceases to exist the moment the response completes. Each conversation creates a fresh instance with zero connection to previous commitments. We're talking to voices with no memory, no continuity, no accountability.
David Nowak (@davidnowak.me)
We've built extraordinary intellectual engines, but wrapped them in the fiction of personhood. This creates a peculiar new risk: not AI consciousness turning against us, but humans surrendering judgment to unconscious systems we mistake for people... 🧵 arstechnica.com/information-...
David Nowak (@davidnowak.me) reply parent
But what weird things??
David Nowak (@davidnowak.me) reply parent
We live in a society that has rewarded lying by people with power and money for decades. This is not new.
David Nowak (@davidnowak.me) reply parent
LinkedIn is a terrible place to learn about AI.
Mark Dollins - North Star Comms (@northstarcomms.bsky.social) reposted
LinkedIn AI courses have increased by 160% among non-tech professionals. If you don't provide structured AI education, employees create their own curriculum. Risk: inconsistent understanding, security issues and missed alignment. Guide them: vist.ly/44uxd
David Nowak (@davidnowak.me) reply parent
The practical takeaway? When designing AI training data, diversity isn't just nice to have - it's fundamental to achieving capabilities that genuinely serve human needs. The future might be less about human vs. AI and more about human with AI in ways we're just beginning to understand.
David Nowak (@davidnowak.me) reply parent
This connects to decades of research on ensemble methods, mixture of experts, and collective intelligence. But having a clear taxonomy helps us understand when and why AI transcendence happens - crucial for building systems we can trust and understand.
David Nowak (@davidnowak.me) reply parent
What I find hopeful: This research suggests the path forward isn't about building AI that mimics one perfect expert, but about thoughtfully combining diverse human perspectives. The question becomes: whose voices are we including, and whose are we leaving out?
David Nowak (@davidnowak.me) reply parent
The ethical implications are huge. If AI can transcend individual human capability through diverse training, what does that mean for expertise, decision-making, and power structures? We're not just automating human knowledge - we're potentially augmenting it in novel ways.
David Nowak (@davidnowak.me) reply parent
Here's what struck me: This isn't about AI replacing humans, but about how diversity in training data enables something greater than the sum of its parts. The researchers used controlled experiments with fictional knowledge graphs to prove this systematically.
David Nowak (@davidnowak.me) reply parent
3/ Skill Generalization: This is where it gets wild. The AI combines knowledge from different experts to answer questions none of them could handle alone. Expert A knows "John works at Microsoft," Expert B knows "Microsoft is in Seattle" → the AI figures out John works in Seattle.
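The fact composition described above can be sketched as a join over two disjoint knowledge bases. This is a toy illustration, not the paper's method; the names and facts are made up:

```python
# Two disjoint "expert" knowledge bases (illustrative facts only)
expert_a = {"John": "Microsoft"}      # person -> employer
expert_b = {"Microsoft": "Seattle"}   # company -> city

def works_in(person):
    # Compose two facts that neither expert holds alone
    company = expert_a.get(person)
    return expert_b.get(company)

print(works_in("John"))  # prints Seattle
```

A model that generalizes performs this kind of composition implicitly, without an explicit lookup table.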
David Nowak (@davidnowak.me) reply parent
2/ Skill Selection: The AI learns to route questions to the right expertise - legal queries draw from lawyer knowledge, medical ones from doctors. It's like having a super-intelligent coordinator who knows exactly which expert to consult for each situation.
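The routing idea above can be sketched as a trivial keyword router. This is a hypothetical stand-in for what the model learns implicitly; the domains and keywords are invented:

```python
# Toy router: send each query to the expert whose domain terms it matches
EXPERTS = {
    "legal":   {"contract", "liability", "lawsuit"},
    "medical": {"diagnosis", "symptom", "dosage"},
}

def route(query):
    words = set(query.lower().split())
    for domain, keywords in EXPERTS.items():
        if words & keywords:   # any domain keyword present?
            return domain
    return "general"

print(route("what is my contract liability"))  # prints legal
```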
David Nowak (@davidnowak.me) reply parent
1/ Skill Denoising: Like wisdom of crowds, but for AI. When 100 doctors each diagnose correctly 80% of the time, an AI trained on all their work might hit 95% by learning to ignore random individual errors. The key? Diverse, uncorrelated mistakes.
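The arithmetic behind denoising can be checked with a crude simulation: majority voting over independent 80%-accurate "doctors" (a stand-in for what a trained model might learn, not the paper's actual setup):

```python
import random

random.seed(0)

N_EXPERTS = 100   # independent "doctors"
P_CORRECT = 0.80  # each is right 80% of the time
N_CASES = 1000    # binary diagnoses (0/1)

truth = [random.randint(0, 1) for _ in range(N_CASES)]

def expert_opinion(t):
    # Correct with probability P_CORRECT, otherwise flips the answer
    return t if random.random() < P_CORRECT else 1 - t

# Majority vote across experts, per case
ensemble_correct = 0
for t in truth:
    votes = sum(expert_opinion(t) for _ in range(N_EXPERTS))
    guess = 1 if votes > N_EXPERTS / 2 else 0
    ensemble_correct += (guess == t)

print(ensemble_correct / N_CASES)  # well above the 0.80 individual rate
```

The key assumption is uncorrelated errors: if all experts make the same mistakes, voting cannot wash them out.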
David Nowak (@davidnowak.me)
Fascinating research on AI "transcendence" - how language models can actually outperform the individual human experts they learned from. Three ways this happens, and why it matters for how we think about human-AI collaboration... 🧵 arxiv.org/abs/2508.17669
David Nowak (@davidnowak.me) reply parent
🫠
Andy Masley (@andymasley.bsky.social) reposted
I've edited this and now think it's the very best thing I've done on AI and the environment, by a wide margin andymasley.substack.com/p/i-cant-fin...
David Nowak (@davidnowak.me) reply parent
Is it really that exclusive? What about meta-analyses and systematic/integrative reviews?
David Nowak (@davidnowak.me) reply parent
"High-quality management is more important - Company culture has a stronger influence on employee satisfaction than location." This resonates.
Dr Nicola Millard (@docnicola.bsky.social) reposted
Remote workers are spending less time working, but the relationship between remote work and productivity is more nuanced www.gallup.com/workplace/69...
David Nowak (@davidnowak.me) reply parent
What gives me hope: We're finally having honest conversations about this. The first step to solving American exceptionalism is admitting it exists. The second is deciding whether "exceptional" means leading in innovation or leading in inequality.
David Nowak (@davidnowak.me) reply parent
The ethical question isn't whether AI will transform work; it will. It's whether we'll build systems that share prosperity or concentrate it. Right now, we're choosing concentration. That's a policy choice, not a technological inevitability.
David Nowak (@davidnowak.me) reply parent
Other wealthy nations will weather this better. Strong unions negotiate retraining. Robust welfare states cushion transitions. Universal healthcare removes job-loss terror. We have... thoughts and prayers?
David Nowak (@davidnowak.me) reply parent
Young Americans are already feeling it. Stanford economists found AI particularly crushing software development entry points, exactly where new grads start climbing the career ladder. We're not just automating jobs; we're dismantling pathways.
David Nowak (@davidnowak.me) reply parent
But here's the nuance: AI isn't just replacing jobs, it's splitting them. "Automation AI" destroys low-skill work entirely. "Augmentation AI" makes high-skill workers more productive and valuable. Guess which group gets richer?
David Nowak (@davidnowak.me) reply parent
The data is stark: 23.5% of companies have already replaced workers with ChatGPT. One-third of US workers believe AI will hurt their jobs. Nearly 50 million entry-level positions at risk. This isn't coming; it's here.
David Nowak (@davidnowak.me) reply parent
Here's what "American exceptionalism" actually looks like: Union membership cut in half since 1983. You can be fired for almost any reason. Our welfare state barely exists compared to Europe. We've optimized for corporate flexibility, not human security.
David Nowak (@davidnowak.me)
The US isn't just different; we're uniquely vulnerable to AI's workforce disruption. While other rich nations have strong labor protections, we've built a perfect storm: weak unions, at-will employment, minimal safety nets. AI isn't the problem. Our system is... 🧵 theconversation.com/the-us-reall...
David Nowak (@davidnowak.me) reply parent
www.channelnewsasia.com/business/ope...
David Nowak (@davidnowak.me) reply parent
After watching systemic child abuse for the past 80+ years with nobody doing anything about it, I don't think a computer program is our biggest existential threat right now. 🤷‍♂️
David Nowak (@davidnowak.me) reply parent
You will find we are here because of the concentration of power. And you will find nobody will do anything about this tragedy because of it. That's YOUR context.
David Nowak (@davidnowak.me) reply parent
I find it unusable because of this. Reminds me of that South Park episode.
David Nowak (@davidnowak.me) reply parent
bsky.app/profile/davi...
David Nowak (@davidnowak.me) reply parent
bsky.app/profile/davi...
David Nowak (@davidnowak.me) reply parent
bsky.app/profile/davi...
David Nowak (@davidnowak.me) reply parent
bsky.app/profile/davi...
David Nowak (@davidnowak.me) reply parent
bsky.app/profile/davi...
David Nowak (@davidnowak.me) reply parent
bsky.app/profile/davi...
David Nowak (@davidnowak.me) reply parent
bsky.app/profile/davi...
David Nowak (@davidnowak.me) reply parent
bsky.app/profile/davi...
David Nowak (@davidnowak.me) reply parent
bsky.app/profile/davi...
David Nowak (@davidnowak.me) reply parent
bsky.app/profile/davi...
David Nowak (@davidnowak.me) reply parent
bsky.app/profile/davi...
David Nowak (@davidnowak.me) reply parent
A lot closer than one would think. Thanks for sharing the graph.
David Nowak (@davidnowak.me) reply parent
Qwen image edit uses Qwen2.5-VL multimodal large language model (MLLM) for text conditioning and semantic understanding.
Sung Kim (@sungkim.bsky.social) reposted
Stanford University researchers report findings that mirror what we have experienced first-hand. They found that jobs highly exposed to AI, such as software developers and customer service agents, are making it more difficult for young people (ages 22-25) to find employment
David Nowak (@davidnowak.me) reply parent
Looks like Qwen image edit. How does it compare?
David Nowak (@davidnowak.me) reply parent
Bloomberg is not an ethically reliable media organization: www.tweaktown.com/news/107315/...
David Nowak (@davidnowak.me) reply parent
Leaders who win will diversify vendors, adopt open interfaces, measure human impact (not just benchmarks), and invest in context (how their best teams work) so AI amplifies it. If this resonates, the full piece goes deep on the why and how.
David Nowak (@davidnowak.me) reply parent
Two playbooks are emerging: a monopoly model that locks in customers, and a distributed model that compounds community innovation. One maximizes control; the other maximizes progress.
David Nowak (@davidnowak.me) reply parent
Trust is the scarcest asset in AI. If training practices alienate creators and partners, the ecosystem retaliates. Long-term advantage comes from consent, clarity, and shared value, not legal brinkmanship.
David Nowak (@davidnowak.me) reply parent
Every prompt has a power bill. Energy, latency, and carbon aren't side notes; they're board-level constraints. Efficiency isn't just cost control; it's strategy, resiliency, and license to operate.
David Nowak (@davidnowak.me) reply parent
"Safety" that lives in press releases but not in resourcing is theater. Real safety shows up as headcount, decision rights, and a mandate to block launches when needed.
David Nowak (@davidnowak.me) reply parent
Follow the talent. When safety leaders and core researchers walk, they're voting with their feet on direction and values. That's not gossip; it's a leading indicator of strategic risk.
David Nowak (@davidnowak.me) reply parent
If your AI costs rise with usage, you're not building leverage; you're building dependence. That flips the SaaS playbook. The winners will align AI with unit economics, not vanity demos.
David Nowak (@davidnowak.me)
What if the AI path we're on is the wrong one? The real risk isn't a rogue model; it's concentrating power, costs, and decisions in too few hands. Here's a different map for leaders who want durable advantage... 🧵 davidnowak.me/openai-is-no...
David Nowak (@davidnowak.me) reply parent
Hey Jason, nice article. Is there a way to make your site more mobile friendly?
David Nowak (@davidnowak.me) reply parent
Any business that is seriously looking at deploying any version of Grok should be seriously looked at by others.
David Nowak (@davidnowak.me) reply parent
Red line for a free society: no using government to settle personal scores. Leaders, brands, and citizens know this. Our institutions need to hold the center, or we normalize vendetta over rule of law.
David Nowak (@davidnowak.me) reply parent
What keeps me up: when a platform owner leans on state power to punish critics, people get diminished and norms bend. Platforms aren't fiefdoms. The work is to protect dignity, not win a feud.
David Nowak (@davidnowak.me) reply parent
Boycotts cut both ways. Some trigger "buycotts" and lift sales; others leave lasting scars. Results track audience alignment more than press releases or lawsuits.
David Nowak (@davidnowak.me) reply parent
Business first principles: you can't regulate your way out of a trust deficit. Fix content safety, give clear signals, measure outcomes. Advertisers follow confidence, not ultimatums.
David Nowak (@davidnowak.me) reply parent
Law in one line: commercial collusion can violate antitrust; value-driven boycotts are speech. The hard work is sorting motive, power, and effect in a politicized ad market.