handle | did | uri | cid | text | originalText | replyCount | createdAt | indexedAt |
---|---|---|---|---|---|---|---|---|
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfkx4lwles2q | bafyreigewc7taf3lhpipqz2vngeibeoru6aa7m6m346rir5nukaoplcig4 | <start>It has a huge context window and is excellent at synthesizing lots of literature or qualitative data. Example below. @emollick.bsky.social | It has a huge context window and is excellent at synthesizing lots of literature or qualitative data. Example below. | 0 | 2025-01-12T19:24:00.451Z | 2025-01-12T19:24:02.623Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfkwysj5fc2q | bafyreiacurziasl4bjqfojfqnvuc6z7wr6kqx3bvn5pduzbw4hnt5t3w2i | <start>And if it turns out to be useless for you, that is instructive as well. @emollick.bsky.social | And if it turns out to be useless for you, that is instructive as well. | 0 | 2025-01-12T19:21:53.138Z | 2025-01-12T19:21:53.421Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfkwsuhe7c2q | bafyreidybstg4t7i7c5rjyg6o7i5ybv6ozeoaqspwvd7frdlii2iepxmia | <start>If you are an academic, it can be instructive to work on a paper with AI. Pretend you are working with a grad student & see what happens. Generally o1 is best for well-defined heavy intellectual tasks, Gemini for synthesizing lots of text, and Claude for writing & theorizing. This varies by field. And if it turns out to be useless for you, that is instructive as well. @emollick.bsky.social | If you are an academic, it can be instructive to work on a paper with AI. Pretend you are working with a grad student & see what happens. Generally o1 is best for well-defined heavy intellectual tasks, Gemini for synthesizing lots of text, and Claude for writing & theorizing. This varies by field. | 1 | 2025-01-12T19:18:33.849Z | 2025-01-12T19:18:34.133Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfjleu5m6c2x | bafyreif367jg2bkp3q2k53cnjw7nk3lm6qxdv7vztclnot7ehjzjxkhh7q | <start>Trying a little experiment: "Hey, Claude, turn the first four stanzas of The Lady of Shalott into a film by describing 8 second clips..." I pasted those into veo 2 verbatim & overlayed the poem for reference (I also asked for one additional scene revealing the Lady, again using the verbatim prompt) Full poem: www.poetryfoundation.org/poems/45359/... @emollick.bsky.social | Trying a little experiment: "Hey, Claude, turn the first four stanzas of The Lady of Shalott into a film by describing 8 second clips..." I pasted those into veo 2 verbatim & overlayed the poem for reference (I also asked for one additional scene revealing the Lady, again using the verbatim prompt) | 1 | 2025-01-12T06:21:12.870Z | 2025-01-12T06:21:14.523Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfjj6rjyms2x | bafyreic2dirjzijjzwxyt2kxx7yckjhdjr4ojnsuf3ugtsy5hebrnid6yy | <start>We don't talk talk enough about how boredom is now scarce. Boredom sparks creativity but only when people are open to new experiences & believe that they are in control of what happens to them ...but for the most part, it makes people do whatever they can for amusement including acting sadistically @emollick.bsky.social | We don't talk talk enough about how boredom is now scarce. Boredom sparks creativity but only when people are open to new experiences & believe that they are in control of what happens to them ...but for the most part, it makes people do whatever they can for amusement including acting sadistically | 0 | 2025-01-12T05:42:01.312Z | 2025-01-12T05:42:03.927Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfimhxikc226 | bafyreicweoloe4g6dwpi4mbt3i3xisjwspspipseasl52lc4nptqubd6dm | <start>The Far Honk of the World Claude suggested Master and Command Gander when asked for some more puns (the rest were all mine), which was pretty good. @emollick.bsky.social | The Far Honk of the World | 1 | 2025-01-11T21:08:11.041Z | 2025-01-11T21:08:11.931Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfij3qpeqc26 | bafyreibpzuic2vmoaknus6hfkjj3dijuoxdkx4vw4g2v5uct7rpgp4z5fm | <start>His whole post is full of interesting insights: simonwillison.net/2024/Dec/31/... @emollick.bsky.social | His whole post is full of interesting insights: simonwillison.net/2024/Dec/31/... | 0 | 2025-01-11T20:07:40.045Z | 2025-01-11T20:07:42.431Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfij27ajek26 | bafyreih7ea37jyaju6puxecyqie5zy2zgqfwq4wqem4z5ipulhxu67ingi | <start>On the environmental cost of AI, this by @simonwillison.net w is useful. The models we use today have become orders of magnitude more efficient in the last year, and each prompt likely uses much less energy than many seem to think. But the aggregate impact of ever-larger data centers is very real. His whole post is full of interesting insights: simonwillison.net/2024/Dec/31/... @emollick.bsky.social | On the environmental cost of AI, this by @simonwillison.net w is useful. The models we use today have become orders of magnitude more efficient in the last year, and each prompt likely uses much less energy than many seem to think. But the aggregate impact of ever-larger data centers is very real. | 1 | 2025-01-11T20:06:48.176Z | 2025-01-11T20:06:49.925Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfidtwojj226 | bafyreidm6dvi32ixplmzkbln6r7ykrpch4dmrgyghiutyuycridqq6rdzy | <start>Yes, this is from Master and Commanduck It stars Russell Crow @emollick.bsky.social | Yes, this is from Master and Commanduck | 1 | 2025-01-11T18:33:49.165Z | 2025-01-11T18:33:49.318Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfgzyh5lm226 | bafyreigjsvxj475aq7fbpqctdchkbj4eucn5s5dpddzhkugpjdxoaa6egy | <start>APRIL - 1805 A GIANT CANADIAN GOOSE IS MASTER OF EUROPE ONLY THE BRITISH FLEET STANDS BEFORE HIM OCEANS ARE NOW BATTLEFIELDS Yes, this is from Master and Commanduck @emollick.bsky.social | APRIL - 1805 A GIANT CANADIAN GOOSE IS MASTER OF EUROPE ONLY THE BRITISH FLEET STANDS BEFORE HIM OCEANS ARE NOW BATTLEFIELDS | 1 | 2025-01-11T06:04:43.496Z | 2025-01-11T06:04:44.516Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfgz562ew226 | bafyreifff5xykjgihyfwf2bchpn4memyvvzfqecngsosll6ibpnbicjb34 | <start>Washington crosses the Delaware on waterfowl, just as history teaches us APRIL - 1805 A GIANT CANADIAN GOOSE IS MASTER OF EUROPE ONLY THE BRITISH FLEET STANDS BEFORE HIM OCEANS ARE NOW BATTLEFIELDS @emollick.bsky.social | Washington crosses the Delaware on waterfowl, just as history teaches us | 1 | 2025-01-11T05:49:27.985Z | 2025-01-11T05:49:28.817Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfgyvowfn226 | bafyreigu7knunf3jkeqato2oezfwxmbemkcscv772ijb2fe75xjl3j4ez4 | <start>The battle of Jutland, but there is a swan boat in the battleline. Washington crosses the Delaware on waterfowl, just as history teaches us @emollick.bsky.social | The battle of Jutland, but there is a swan boat in the battleline. | 1 | 2025-01-11T05:45:17.245Z | 2025-01-11T05:45:18.417Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfgyqxa4ps26 | bafyreiaoraln6tvktpuuaidm3znwbd65cchknsddawqhbppjpohqxj6qq4 | <start>The Romans fight Carthage at Cannae, but the Carthaginians have Giant War Chickens The battle of Jutland, but there is a swan boat in the battleline. @emollick.bsky.social | The Romans fight Carthage at Cannae, but the Carthaginians have Giant War Chickens | 1 | 2025-01-11T05:42:38.180Z | 2025-01-11T05:42:39.217Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfgyph5r6k26 | bafyreifdjsretetmyihk3sgnvh6fumo22qdovmepkil5qx4wkcchvngndq | <start>The British infantry at Waterloo during a French cavalry charge, the redcoats are all wearing rubber ducks on their heads. Veo2 showing history as it was. The Romans fight Carthage at Cannae, but the Carthaginians have Giant War Chickens @emollick.bsky.social | The British infantry at Waterloo during a French cavalry charge, the redcoats are all wearing rubber ducks on their heads. Veo2 showing history as it was. | 1 | 2025-01-11T05:41:47.771Z | 2025-01-11T05:41:48.721Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfgycza4mk26 | bafyreicyvrdxq6f5wkit7top2ygvz7mu7e55voxwwy27qgxxzm6mzsmu7i | <start>Lots of other very clever stuff in the paper: arxiv.org/pdf/2501.045... @emollick.bsky.social | Lots of other very clever stuff in the paper: arxiv.org/pdf/2501.045... | 0 | 2025-01-11T05:34:50.515Z | 2025-01-11T05:34:50.815Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfgybpxopk26 | bafyreifpwrtvjzwzv3wp5e3pqi3ftdzz36awyy5btnwvds7o5omrtolpzu | <start>Paper shows very small LLMs can match or beat larger ones through 'deep thinking' - evaluating different solution paths - and other tricks. Their 7B model beats o1-preview on complex math by exploring 64 different solutions & picking the best one. Test-time compute paradigm seems really fruitful. Lots of other very clever stuff in the paper: arxiv.org/pdf/2501.045... @emollick.bsky.social | Paper shows very small LLMs can match or beat larger ones through 'deep thinking' - evaluating different solution paths - and other tricks. Their 7B model beats o1-preview on complex math by exploring 64 different solutions & picking the best one. Test-time compute paradigm seems really fruitful. | 1 | 2025-01-11T05:34:07.244Z | 2025-01-11T05:34:09.224Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lffyhjpbfs22 | bafyreicuvsyxhy2rcns7m5ewdzc43vlp2rvcengc74pnzws2n6k452ohhm | <start>Also includes the ultimate version of "otter on a plane using wifi" - my old test for AI image models that is now obsolete because this is a trivial thing for all image generators. Thus, I turned it into a video with veo 2. @emollick.bsky.social | Also includes the ultimate version of "otter on a plane using wifi" - my old test for AI image models that is now obsolete because this is a trivial thing for all image generators. Thus, I turned it into a video with veo 2. | 0 | 2025-01-10T20:04:42.267Z | 2025-01-10T20:04:43.428Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lffwcou7k222 | bafyreih2fw33vlebatcaa6vwel6h5hxuw6tenx7kqfl4wfvfoujszkylu4 | <start>The interesting piece is not the financial trading (I wouldn't make bets on the AI's advice), but the emerging model of using agentic systems in quasi-organizational configurations. github.com/virattt/ai-h... @emollick.bsky.social | The interesting piece is not the financial trading (I wouldn't make bets on the AI's advice), but the emerging model of using agentic systems in quasi-organizational configurations. | 1 | 2025-01-10T19:26:12.417Z | 2025-01-10T19:26:12.827Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lffw5lyf7s22 | bafyreif2sihgtwabye5sutfer5ju5slbulua434nia7psug6gikjgrumeq | <start>This is neat. I'm playing with Virat Singh's open source AI "hedge fund" where different "agents" each performs a different analysis and gives them to a portfolio manager agent to make final calls. Also does back-testing to see how well it performs. . Early days (don't trade on it), but interesting The interesting piece is not the financial trading (I wouldn't make bets on the AI's advice), but the emerging model of using agentic systems in quasi-organizational configurations. @emollick.bsky.social | This is neat. I'm playing with Virat Singh's open source AI "hedge fund" where different "agents" each performs a different analysis and gives them to a portfolio manager agent to make final calls. Also does back-testing to see how well it performs. . Early days (don't trade on it), but interesting | 1 | 2025-01-10T19:23:21.631Z | 2025-01-10T19:23:23.620Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lffhozstgs22 | bafyreidiazuo6ggu77eljvnldjezhsi3isq46pszo5qmafkm5bpgsmspj4 | <start>There has been a definite shift in recent weeks where insiders in the various AI labs are suggesting that very intelligent AIs are coming very soon. I wrote a bit about why this might be happening and what we can take away from their apparent confidence. www.oneusefulthing.org/p/prophecies... Also includes the ultimate version of "otter on a plane using wifi" - my old test for AI image models that is now obsolete because this is a trivial thing for all image generators. Thus, I turned it into a video with veo 2. @emollick.bsky.social | There has been a definite shift in recent weeks where insiders in the various AI labs are suggesting that very intelligent AIs are coming very soon. I wrote a bit about why this might be happening and what we can take away from their apparent confidence. www.oneusefulthing.org/p/prophecies... | 1 | 2025-01-10T15:04:40.431Z | 2025-01-10T15:04:42.023Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfdj5li46c22 | bafyreighe5vpn5ug5rkot4jptc23sj3uz73n6oxkxabtwj73nv53gouq5e | <start>I appreciate Llama just giving up immediately (not that it was wrong) @emollick.bsky.social | I appreciate Llama just giving up immediately (not that it was wrong) | 0 | 2025-01-09T20:25:22.981Z | 2025-01-09T20:25:23.326Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfdixq462k22 | bafyreihrprjb5pukpeoo6teehihzybt52kdvxpabfgxvvxbsun2rtithcq | <start>Hey Claude, o1-pro, Llama, and Gemini 2.0 spiral galaxy : cheese :: elephant : I appreciate Llama just giving up immediately (not that it was wrong) @emollick.bsky.social | Hey Claude, o1-pro, Llama, and Gemini 2.0 spiral galaxy : cheese :: elephant : | 1 | 2025-01-09T20:22:06.500Z | 2025-01-09T20:22:08.523Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfcyti4ajc22 | bafyreifsrqylb6iua5muhto2x335mx5jqczvwhsl5fdh4jxrhciuplgxyu | <start>Interesting attempt to build an AI agent-based research assistant to automate machine learning paper writing by acting as PhDs, post-docs, & professors working in a typical lab. It doesn't autonomously produce high-level work but it looks promising as a copilot for researchers cutting cost & effort. Paper: arxiv.org/pdf/2501.04227 @emollick.bsky.social | Interesting attempt to build an AI agent-based research assistant to automate machine learning paper writing by acting as PhDs, post-docs, & professors working in a typical lab. It doesn't autonomously produce high-level work but it looks promising as a copilot for researchers cutting cost & effort. | 1 | 2025-01-09T15:33:24.030Z | 2025-01-09T15:33:25.920Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfblnuk6l222 | bafyreierlo5kq2qwob2nct6ke22kqc5qmwgmcssnz4fbyqz64xwfpkvtbq | <start>It turns out you can give Claude instructions in the forms of lines of poetry from Adrienne Rich or TS Eliot and get very thoughtful analysis of empirical papers as a result. @emollick.bsky.social | It turns out you can give Claude instructions in the forms of lines of poetry from Adrienne Rich or TS Eliot and get very thoughtful analysis of empirical papers as a result. | 0 | 2025-01-09T02:04:57.352Z | 2025-01-09T02:04:59.028Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfb2pbprek22 | bafyreiajccyuwbncmfp52f6cezq3gjawt65xft36gdvndi4oipt6fs2nna | <start>Increasingly large food. All made by me with veo 2, the serious point is to note how impressive the "physics" of these models have become (also the consistency, like the spilled burrito across two different shots), there are some issues, obviously, but big advances. Increasingly small food. @emollick.bsky.social | Increasingly large food. All made by me with veo 2, the serious point is to note how impressive the "physics" of these models have become (also the consistency, like the spilled burrito across two different shots), there are some issues, obviously, but big advances. | 1 | 2025-01-08T21:01:31.123Z | 2025-01-08T21:01:32.415Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lfatqbqn2223 | bafyreiemoi3iqpvhm2frmo6ywuy5omaptjobztg4dm7ycevnb2fa4iwwxu | <start>Given that AI works a lot like people, and that human experts are often the only ones who can judge the quality of AI results, Jensen’s “IT departments as the new HR” is, I think, a mistake You don’t want IT alone decide how work gets done. You also need managers who understand how to work with AI. @emollick.bsky.social | Given that AI works a lot like people, and that human experts are often the only ones who can judge the quality of AI results, Jensen’s “IT departments as the new HR” is, I think, a mistake You don’t want IT alone decide how work gets done. You also need managers who understand how to work with AI. | 0 | 2025-01-08T18:56:48.509Z | 2025-01-08T18:56:50.220Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lf77635ujs2y | bafyreifz3chml7meisikut7ci2nuekwjg4i3ifhn7saqlgx22eq6inezfa | <start>Simulated AI hospital where “doctor” agents work w/ simulated “patients” & improve: “After treating around ten thousand patients, the evolved doctor agent achieves a state-of-the-art accuracy of 93.06% on a subset of the MedQA dataset that covers major respiratory diseases” arxiv.org/abs/2405.02957 @emollick.bsky.social | Simulated AI hospital where “doctor” agents work w/ simulated “patients” & improve: “After treating around ten thousand patients, the evolved doctor agent achieves a state-of-the-art accuracy of 93.06% on a subset of the MedQA dataset that covers major respiratory diseases” arxiv.org/abs/2405.02957 | 0 | 2025-01-08T03:16:03.031Z | 2025-01-08T03:16:06.623Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lf6aheko6c2y | bafyreie5ecrgljp7d33qnbwvrknrms42boolksler6g5r2fthj225jmfie | <start>Seems relevant given the discussion of how fact-checking & information work online that X appears to have rolled out detailed AI interactions on tweets, the first fundamental change in how social media works in awhile It is a fascinating initial implementation with unclear consequences overall. @emollick.bsky.social | Seems relevant given the discussion of how fact-checking & information work online that X appears to have rolled out detailed AI interactions on tweets, the first fundamental change in how social media works in awhile It is a fascinating initial implementation with unclear consequences overall. | 0 | 2025-01-07T18:06:28.894Z | 2025-01-07T18:06:31.823Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lf63ltrdsc2e | bafyreidymuhx652t2vjasawhwspdqqtxvg225ptaevesgmiyltgcjfxhsa | <start>I see a lot of (correct) complaints that AGI and agents are badly defined. This problem will not be solved because: 1) AGI and agents inherently rely on comparisons to humans, and we don't have good definitions of human agency or general ability 2) Marketing is incentivized to blur any definitions @emollick.bsky.social | I see a lot of (correct) complaints that AGI and agents are badly defined. This problem will not be solved because: 1) AGI and agents inherently rely on comparisons to humans, and we don't have good definitions of human agency or general ability 2) Marketing is incentivized to blur any definitions | 0 | 2025-01-07T16:39:30.358Z | 2025-01-07T16:39:32.120Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lf3dwzdgrs2a | bafyreibygrnujukvisp5may4kzaqepg54hvvgsmcyfkycgs76bjvjamed4 | <start>What percentage of students are you willing to falsely accuse of cheating with AI? There is a trade-off between false accusations and detection rates for AI. At a 10% false positive rate, detectors find 80% or less of AI content. At a 1% rate most find 60% or less. Don’t trust AI detectors! @emollick.bsky.social | What percentage of students are you willing to falsely accuse of cheating with AI? There is a trade-off between false accusations and detection rates for AI. At a 10% false positive rate, detectors find 80% or less of AI content. At a 1% rate most find 60% or less. Don’t trust AI detectors! | 0 | 2025-01-06T14:30:56.002Z | 2025-01-06T14:30:58.675Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lf3chiyavc2a | bafyreicwqekv75jxtstpt7vmbnci4ipqdesak7zo634j7ywk4g5t5mszum | <start>Reminder for the new semester that you can’t detect AI Researchers secretly added AI-created papers to the exam pool: “We found that 94% of our AI submissions were undetected. The grades awarded to our AI submissions were on average half a grade boundary higher than that achieved by real students” Paper: journals.plos.org/plosone/arti... @emollick.bsky.social | Reminder for the new semester that you can’t detect AI Researchers secretly added AI-created papers to the exam pool: “We found that 94% of our AI submissions were undetected. The grades awarded to our AI submissions were on average half a grade boundary higher than that achieved by real students” | 1 | 2025-01-06T14:04:21.800Z | 2025-01-06T14:04:24.228Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lf2ok4nk7k2e | bafyreifh66muldkhnxrreozvkoqeukmghojz4etqjggo7qcsglzn3wssei | <start>No. I talk about both, trying to show where the research is. The difference with many AI critics is that I don’t see evidence that AI is “hype” that will disappear. bsky.app/profile/emol... @emollick.bsky.social | No. I talk about both, trying to show where the research is. The difference with many AI critics is that I don’t see evidence that AI is “hype” that will disappear. bsky.app/profile/emol... | 0 | 2025-01-06T08:07:54.701Z | 2025-01-06T08:07:54.920Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lf2cstw2u22e | bafyreigxjipbyvpuz6ls4lwr5yamp352rhogg4qdixa22b4jon2ggwv5we | <start>I don’t think that is a useful analogy in any way given that the labs are actually building this technology and that the progress in the last two years have objectively been very rapid. But you can decide whatever you like. They will be proven right or wrong soon enough. @emollick.bsky.social | I don’t think that is a useful analogy in any way given that the labs are actually building this technology and that the progress in the last two years have objectively been very rapid. But you can decide whatever you like. They will be proven right or wrong soon enough. | 0 | 2025-01-06T04:38:02.632Z | 2025-01-06T04:38:02.819Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lf2bdvsuhk2e | bafyreiaxu7c67bf55dbpxwvbrrw4gvttg2m3togztamcbspx7nkyxklapa | <start>This bit of Sam Altman’s newest post is similar in tone to a post by the CEO of Anthropic & what many (not all) researchers from every lab have been saying publicly and privately. You do not have to believe them, but I think they believe what they are saying, for what it worth. blog.samaltman.com @emollick.bsky.social | This bit of Sam Altman’s newest post is similar in tone to a post by the CEO of Anthropic & what many (not all) researchers from every lab have been saying publicly and privately. You do not have to believe them, but I think they believe what they are saying, for what it worth. blog.samaltman.com | 0 | 2025-01-06T04:11:47.559Z | 2025-01-06T04:11:48.625Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lezi3i6ink2y | bafyreiaekdynuyvuoas323yjxhn3yanfs6ufuvinmwwfmq5oquomvrc2pa | <start>There is not a field that will be untouched by this technology, and everyone is desperate for any sort of research-backed advice on when it is good to use AI, when it should be avoided, what it will do to our jobs, what it means for society, how to avoid harms & get benefits... Academia is needed! @emollick.bsky.social | There is not a field that will be untouched by this technology, and everyone is desperate for any sort of research-backed advice on when it is good to use AI, when it should be avoided, what it will do to our jobs, what it means for society, how to avoid harms & get benefits... Academia is needed! | 0 | 2025-01-05T20:39:41.287Z | 2025-01-05T20:39:42.316Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lezhpqk6b22y | bafyreiceueucn7h2ewgvye6sbrydqf6wtygw3rsulprxbuurm7h7y34thq | <start>I read a lot of social science papers on AI and my conclusion is that there are far too few people rigorously studying the implications (good & bad) of LLMs across society & work Computer science is producing a tide of good AI work. Economics, management, psych, & sociology etc. need to do the same There is not a field that will be untouched by this technology, and everyone is desperate for any sort of research-backed advice on when it is good to use AI, when it should be avoided, what it will do to our jobs, what it means for society, how to avoid harms & get benefits... Academia is needed! @emollick.bsky.social | I read a lot of social science papers on AI and my conclusion is that there are far too few people rigorously studying the implications (good & bad) of LLMs across society & work Computer science is producing a tide of good AI work. Economics, management, psych, & sociology etc. need to do the same | 1 | 2025-01-05T20:33:07.405Z | 2025-01-05T20:33:08.342Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lezbftfv3k2y | bafyreicxlwp57baef7cgksn65eijckexf37kx2m35uynakdcyhamdcnw2a | <start>Working paper studying freelancers argues AI creates an "inflection point" for each job type. Before that, AI boosts freelancer earnings (web devs saw a +65% increase). After, AI replaces freelancers (translators saw -30% drop). They suggest that once AI starts replacing a job, it doesn't go back. scholarspace.manoa.hawaii.edu/server/api/c... @emollick.bsky.social | Working paper studying freelancers argues AI creates an "inflection point" for each job type. Before that, AI boosts freelancer earnings (web devs saw a +65% increase). After, AI replaces freelancers (translators saw -30% drop). They suggest that once AI starts replacing a job, it doesn't go back. | 1 | 2025-01-05T18:40:12.412Z | 2025-01-05T18:40:15.025Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3leytpizogs2a | bafyreiheb3fx5zf6ryyp2hrsw5c7qnqdnfivbvaxrzvyaloyfh6aceo6lq | <start>Hopefully I dont do it for no reason. My rule is that I am quick to block people who I feel are mean or insulting to me or to others. I am sure I get it wrong sometimes (I have a lot of followers & comments) but there is no other system out there to protect myself as Twitter has no moderation. @emollick.bsky.social | Hopefully I dont do it for no reason. My rule is that I am quick to block people who I feel are mean or insulting to me or to others. I am sure I get it wrong sometimes (I have a lot of followers & comments) but there is no other system out there to protect myself as Twitter has no moderation. | 0 | 2025-01-05T14:35:04.688Z | 2025-01-05T14:35:04.825Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lexu3e7e5c2y | bafyreiefcnxvhdzu6gpa4eqrny7ips25wkonskhxxpym4fvi3kw2lfp46m | <start>And I think I am done with this series for now. The lesson here (aside from the fact that I play too many games) is that AI tools are getting pretty impressive, most of my prompts produced good work in the first set or two. The pipeline from idea to draft is shortening, with wide implications. @emollick.bsky.social | And I think I am done with this series for now. The lesson here (aside from the fact that I play too many games) is that AI tools are getting pretty impressive, most of my prompts produced good work in the first set or two. The pipeline from idea to draft is shortening, with wide implications. | 0 | 2025-01-05T05:09:02.546Z | 2025-01-05T05:09:02.316Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lextvpgsoc2y | bafyreiff4u6z6phsakvq2znzha2jt7jqumezbfh7qcrhgp2wiorzgwnm4a | <start>Hades as a weekday morning family counseling show. And I think I am done with this series for now. The lesson here (aside from the fact that I play too many games) is that AI tools are getting pretty impressive, most of my prompts produced good work in the first set or two. The pipeline from idea to draft is shortening, with wide implications. @emollick.bsky.social | Hades as a weekday morning family counseling show. | 1 | 2025-01-05T05:05:52.998Z | 2025-01-05T05:05:53.819Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lexeswk4qs2c | bafyreigs34zauly2gaexuuksdpufbckv3bvkvy53jectomxnvmw3z7leia | <start>Katamari Damarcy as an extreme home cleaning show (staring the Princess of All Cosmos) Hades as a weekday morning family counseling show. @emollick.bsky.social | Katamari Damarcy as an extreme home cleaning show (staring the Princess of All Cosmos) | 1 | 2025-01-05T00:35:53.655Z | 2025-01-05T00:35:55.017Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lex3zfcqm22c | bafyreiczirkebfnkoqda6b2g3z6iokrfbtbhvwpixz7obewdiaz444wvrq | <start>Diablo as Antiques Roadshow Katamari Damarcy as an extreme home cleaning show (staring the Princess of All Cosmos) @emollick.bsky.social | Diablo as Antiques Roadshow | 1 | 2025-01-04T21:58:26.793Z | 2025-01-04T21:58:27.826Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lex32yqfks2c | bafyreib5su4ux2gqnnyqczaxuxccrarnr5hmqrmvbjpkj6xsct6luzhjt4 | <start>Minecraft as home improvement show on HGTV. Diablo as Antiques Roadshow @emollick.bsky.social | Minecraft as home improvement show on HGTV. | 1 | 2025-01-04T21:41:26.976Z | 2025-01-04T21:41:28.720Z |
emollick.bsky.social | did:plc:flxq4uyjfotciovpw3x3fxnu | at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lewncu27y22c | bafyreihoboqkfb2is4h6nxnmlgzfc3s3uytesml7whvuanlrkifwdfsdbm | <start>Stop motion Undertale Minecraft as home improvement show on HGTV. @emollick.bsky.social | Stop motion Undertale | 1 | 2025-01-04T17:35:18.105Z | 2025-01-04T17:35:19.221Z |
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lewle7dsis2c
|
bafyreif6y3ci5ssncpqy432gcwytvoy6wihhgltvdtmigok6zvfujfueju
|
<start>That's not particularly useful. I don't think anyone thinks AI systems are smarter than humans at every intellectual task today, and it is easy to find lots of areas where they fail.
The question involves what the labs (& others) are saying is going to happen next, you don't have to believe them.
@emollick.bsky.social
|
That's not particularly useful. I don't think anyone thinks AI systems are smarter than humans at every intellectual task today, and it is easy to find lots of areas where they fail.
The question involves what the labs (& others) are saying is going to happen next, you don't have to believe them.
| 0 |
2025-01-04T17:00:16.024Z
|
2025-01-04T17:00:18.918Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lewlabfaxk2c
|
bafyreiee2spmpyvcgrszopih3k4qsi7hqlro4x3vxx75kiw7cqvtrj7oji
|
<start>Why would you have another person figure it out when you have a machine that (they believe) will be smarter than a person?
@emollick.bsky.social
|
Why would you have another person figure it out when you have a machine that (they believe) will be smarter than a person?
| 0 |
2025-01-04T16:58:03.951Z
|
2025-01-04T16:58:06.427Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lewl6jeygc2c
|
bafyreiahz6jmcj3sjztsqsduc665xpptxwu5igt5vu5a7uga63wtotszba
|
<start>I don't think a lot of people are really considering what "a machine smarter than humans at every intellectual task" means for the organizations that have one
Human capital is largely the determinant of organizational success.
(And you don't have to believe AGI is possible, but the labs really do)
@emollick.bsky.social
|
I don't think a lot of people are really considering what "a machine smarter than humans at every intellectual task" means for the organizations that have one
Human capital is largely the determinant of organizational success.
(And you don't have to believe AGI is possible, but the labs really do)
| 0 |
2025-01-04T16:57:05.222Z
|
2025-01-04T16:57:08.119Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lewl3kwug22c
|
bafyreihl2oeqj66zpvbky3cvt3spubrhh4sbyy3cgsb4vk5a5viuf5tm34
|
<start>The reason you sell API access is so other firms with specialized knowledge & assets can figure out how to generate additional profits from the value you are giving them in your software.
But if your machine is really smarter than humans, it substitutes for many specialized assets & knowledge. I don't think a lot of people are really considering what "a machine smarter than humans at every intellectual task" means for the organizations that have one
Human capital is largely the determinant of organizational success.
(And you don't have to believe AGI is possible, but the labs really do)
@emollick.bsky.social
|
The reason you sell API access is so other firms with specialized knowledge & assets can figure out how to generate additional profits from the value you are giving them in your software.
But if your machine is really smarter than humans, it substitutes for many specialized assets & knowledge.
| 1 |
2025-01-04T16:55:26.193Z
|
2025-01-04T16:55:29.225Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lewl2dgpc22c
|
bafyreig3iiieea4gz3ccmkepakmwjyrakrxovzlpkopgwj2kxucvc4u5pe
|
<start>So the labs aiming for AGI say they will have a system smarter than a human at every intellectual task in the future. Have any of them actually said what THEY are going to do with this?
Are they building the capacity to trade stocks? Do pharma research? Why not? Selling API access seems pretty weak The reason you sell API access is so other firms with specialized knowledge & assets can figure out how to generate additional profits from the value you are giving them in your software.
But if your machine is really smarter than humans, it substitutes for many specialized assets & knowledge.
@emollick.bsky.social
|
So the labs aiming for AGI say they will have a system smarter than a human at every intellectual task in the future. Have any of them actually said what THEY are going to do with this?
Are they building the capacity to trade stocks? Do pharma research? Why not? Selling API access seems pretty weak
| 1 |
2025-01-04T16:54:44.769Z
|
2025-01-04T16:54:47.728Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lewh3gqnrs2c
|
bafyreiaz5nqda643tieblxgwtn2itscaluysbwesk5x2jd732kzrasust4
|
<start>Space Invaders as low-budget 1950s Ed Wood scifi Pong as 1970s martial arts movie
@emollick.bsky.social
|
Space Invaders as low-budget 1950s Ed Wood scifi
| 1 |
2025-01-04T15:43:46.828Z
|
2025-01-04T15:43:50.330Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3levfivl6qk2c
|
bafyreibpotafjzu7u7qmxfcu4i36vfa3i6ecazp4rankpjuqefanoqx5gy
|
<start>Grand Theft Auto as a colorized silent slapstick film Space Invaders as low-budget 1950s Ed Wood scifi
@emollick.bsky.social
|
Grand Theft Auto as a colorized silent slapstick film
| 1 |
2025-01-04T05:42:51.362Z
|
2025-01-04T05:42:53.317Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lev3lckdc22c
|
bafyreichvtahhezrqoihd5oxdpw3r4mhwjqmuy5g2chf6fttczn3pydf4q
|
<start>Stardew Valley as a 1990s soap opera Grand Theft Auto as a colorized silent slapstick film
@emollick.bsky.social
|
Stardew Valley as a 1990s soap opera
| 1 |
2025-01-04T02:45:14.657Z
|
2025-01-04T02:45:16.620Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3leuuyoyvds2c
|
bafyreiabv3c3xdtbfp4tnvbkdtc7ji4a3yiij27vxoiqnjqw242rwmqj2u
|
<start>Pokemon as found footage horror Stardew Valley as a 1990s soap opera
@emollick.bsky.social
|
Pokemon as found footage horror
| 1 |
2025-01-04T00:47:27.732Z
|
2025-01-04T00:47:29.818Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3leulwar7lk2c
|
bafyreia2veqfjzuddazyvowisjviobu7aewro7fnmbbfn4aw3e33nbrfri
|
<start>Donkey Kong as a nature documentary Pokemon as found footage horror
@emollick.bsky.social
|
Donkey Kong as a nature documentary
| 1 |
2025-01-03T22:05:02.014Z
|
2025-01-03T22:05:03.818Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3leujxulwds2c
|
bafyreia7h5wbmsptex4r3chizwla6rxunbgwsw224llawrbvg4r3t26jye
|
<start>Zelda as a community theater production Donkey Kong as a nature documentary
@emollick.bsky.social
|
Zelda as a community theater production
| 1 |
2025-01-03T21:30:08.884Z
|
2025-01-03T21:30:10.725Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3leui723sv22c
|
bafyreihatk4fheoeygxzvbnjimecfnjs5i46djtxlfhblyejumagrfbjrm
|
<start>I have been having fun turning the Mario Brothers into a 1940s industrial film using Veo 2. Zelda as a community theater production
@emollick.bsky.social
|
I have been having fun turning the Mario Brothers into a 1940s industrial film using Veo 2.
| 1 |
2025-01-03T20:58:22.044Z
|
2025-01-03T20:58:23.820Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3letsbn7kdk2x
|
bafyreiainredytjpjw3x6ar4mne3ii4ipcaxl22whb3tompt3y4qq4yb5e
|
<start>This tweet is exactly what you would expect to see in a world where AI capabilities are growing fast: last year’s very hard math tests built by experts to challenge AI are already getting solved, we need much harder tests.
Feels like the background news story in the first scene of a scifi drama.
@emollick.bsky.social
|
This tweet is exactly what you would expect to see in a world where AI capabilities are growing fast: last year’s very hard math tests built by experts to challenge AI are already getting solved, we need much harder tests.
Feels like the background news story in the first scene of a scifi drama.
| 0 |
2025-01-03T14:26:06.867Z
|
2025-01-03T14:26:08.520Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3leskb3ovuk2c
|
bafyreid3lkcn7dapzhcl5kd36va7fqgw6qiz5tzqprtkacaoqrzyo7qisy
|
<start>New study on AI & investing: When GPT-4o summarizes earnings calls to match investor expertise level (simpler for novices, technical for experts):
Sophisticated investors get +9.6% improvements in 1-year returns, novices get +1.7%
AI helps everyone, but expertise amplifies its benefits considerably! papers.ssrn.com/sol3/papers....
@emollick.bsky.social
|
New study on AI & investing: When GPT-4o summarizes earnings calls to match investor expertise level (simpler for novices, technical for experts):
Sophisticated investors get +9.6% improvements in 1-year returns, novices get +1.7%
AI helps everyone, but expertise amplifies its benefits considerably!
| 1 |
2025-01-03T02:29:58.833Z
|
2025-01-03T02:30:01.220Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lesj6jros22c
|
bafyreig6vmcbopcami4pmhk2y3fdxz6r5u6pq2dupr2e7e3kufyduu2qrm
|
<start>Sora: "stereotypical montage of flashback shots of the dead wife of the hero, talking to the camera, except she is holding a potato in every scene"
@emollick.bsky.social
|
Sora: "stereotypical montage of flashback shots of the dead wife of the hero, talking to the camera, except she is holding a potato in every scene"
| 0 |
2025-01-03T02:10:39.201Z
|
2025-01-03T02:10:41.222Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lergvhpjj22c
|
bafyreiam4qqfcqmf7o6jfixbeltmhacdd376vrs3wf3u4l3jtss2l76fze
|
<start>And also this is not any sort of conclusive proof that they trained deliberately on ChatGPT data, it could be contamination of their training data and a lack of post-training work. But trained on ChatGPT data it clearly was.
@emollick.bsky.social
|
And also this is not any sort of conclusive proof that they trained deliberately on ChatGPT data, it could be contamination of their training data and a lack of post-training work. But trained on ChatGPT data it clearly was.
| 0 |
2025-01-02T15:57:07.821Z
|
2025-01-02T15:57:11.324Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lergpmcuwk2c
|
bafyreiewaf4cxcd2togypx7u2dwwm4lnv2kjidsh5xbmsjny5ansx75jo4
|
<start>I don't think this is particularly shocking, we saw similar answers from other models.
It does indicate how much of a role OpenAI has played in the boom of models - their approaches (reasoning, etc.), their interfaces (chatbots), and their output (for training) underlie a lot of other LLMs. And also this is not any sort of conclusive proof that they trained deliberately on ChatGPT data, it could be contamination of their training data and a lack of post-training work. But trained on ChatGPT data it clearly was.
@emollick.bsky.social
|
I don't think this is particularly shocking, we saw similar answers from other models.
It does indicate how much of a role OpenAI has played in the boom of models - their approaches (reasoning, etc.), their interfaces (chatbots), and their output (for training) underlie a lot of other LLMs.
| 1 |
2025-01-02T15:53:51.322Z
|
2025-01-02T15:53:54.721Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lep2q5r6rk23
|
bafyreihub32ngbmurqiqi3g5gzyg63ualhrbbwbvkzmow2cb4buvjc4zza
|
<start>I spent a lot of time on Google before using AI. This was not easy to find
@emollick.bsky.social
|
I spent a lot of time on Google before using AI. This was not easy to find
| 0 |
2025-01-01T17:14:05.239Z
|
2025-01-01T17:14:05.414Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lep2lvzdts2c
|
bafyreiami4jpykueqcnsix5kkj56gta6vr6hus2qi57vfr4zupfzgsijf4
|
<start>With the new reasoning models, there is increasing value in coming up with good questions. That reminded me of an old short scifi story about a future where coming up with new PhD ideas in a world of AI was hard
I couldn't remember the name for the last year. Only Gemini Deep Research found it. I spent a lot of time on Google before using AI. This was not easy to find
@emollick.bsky.social
|
With the new reasoning models, there is increasing value in coming up with good questions. That reminded me of an old short scifi story about a future where coming up with new PhD ideas in a world of AI was hard
I couldn't remember the name for the last year. Only Gemini Deep Research found it.
| 1 |
2025-01-01T17:11:42.888Z
|
2025-01-01T17:11:46.323Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3leokzva2fs2d
|
bafyreigaaqtudng3gfefnptaj5uwzhq7offqriztue64gwhhewoge72ls4
|
<start>This all means that things will get weirder and the weirdness will be unevenly distributed.
@emollick.bsky.social
|
This all means that things will get weirder and the weirdness will be unevenly distributed.
| 0 |
2025-01-01T12:33:11.964Z
|
2025-01-01T12:33:12.215Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3leokol6l2k2d
|
bafyreidtxwxzt7uuey4esn5iogsomggchrkmvbllr4tsujv5qzxhocpeoa
|
<start>Easy prediction for 2025 is that the gains in AI model capability will continue to grow much faster than (a) the vast majority of people’s understanding of what AI can do & (b) organizations’ ability to absorb the pace of change. Social change is much slower than technological change. This all means that things will get weirder and the weirdness will be unevenly distributed.
@emollick.bsky.social
|
Easy prediction for 2025 is that the gains in AI model capability will continue to grow much faster than (a) the vast majority of people’s understanding of what AI can do & (b) organizations’ ability to absorb the pace of change. Social change is much slower than technological change.
| 1 |
2025-01-01T12:26:52.330Z
|
2025-01-01T12:26:52.516Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lemxligf422d
|
bafyreie7zoubyumcojp5vwlerxmg4ededoxaikddtgidhmtxaywsacibni
|
<start>I think there are very many strings attached to PhDs and lots of pressure on directions to take, and, especially if you are aiming at a job in academia, you do need to consider the future. Many students already do (going into fields that they think will be in demand, avoiding dead-end research etc.)
@emollick.bsky.social
|
I think there are very many strings attached to PhDs and lots of pressure on directions to take, and, especially if you are aiming at a job in academia, you do need to consider the future. Many students already do (going into fields that they think will be in demand, avoiding dead-end research etc.)
| 0 |
2024-12-31T21:12:27.945Z
|
2024-12-31T21:12:32.016Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lemxbnbyvs2d
|
bafyreibpccog5tcpvt2ognhzejeilgpqa6cfrpxjqc7ohvxvuuk73sjjjy
|
<start>Relatedly, some thoughts about how AI is changing academic research (written before NotebookLM and o1/o3) www.oneusefulthing.org/p/four-singu...
@emollick.bsky.social
|
Relatedly, some thoughts about how AI is changing academic research (written before NotebookLM and o1/o3) www.oneusefulthing.org/p/four-singu...
| 0 |
2024-12-31T21:06:57.500Z
|
2024-12-31T21:07:09.920Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lemx646mls2d
|
bafyreiahyc75cnyyplczjrk6skayt5cggyyvq2xy5ihmtk7ayojpqovgjq
|
<start>I don’t like to give advice to PhDs, in part because I felt like I got tons of conflicting advice myself, but…
… a PhD takes at least 4 years, a timeframe likely encompassing considerable AI gains. I would tell them to think in advance about what things it will change & how to use them in your favor Relatedly, some thoughts about how AI is changing academic research (written before NotebookLM and o1/o3) www.oneusefulthing.org/p/four-singu...
@emollick.bsky.social
|
I don’t like to give advice to PhDs, in part because I felt like I got tons of conflicting advice myself, but…
… a PhD takes at least 4 years, a timeframe likely encompassing considerable AI gains. I would tell them to think in advance about what things it will change & how to use them in your favor
| 1 |
2024-12-31T21:04:58.899Z
|
2024-12-31T21:05:00.124Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lem7ca3du22h
|
bafyreigwrz4vwil5om3afrya77wcwwi7anadjmtrqrik7vhvjdubdmwbfe
|
<start>Original article: www.aft.org/ae/winter202...
Study of AI book comprehension (using somewhat older models): arxiv.org/abs/2404.01261
@emollick.bsky.social
|
Original article: www.aft.org/ae/winter202...
Study of AI book comprehension (using somewhat older models): arxiv.org/abs/2404.01261
| 0 |
2024-12-31T13:57:47.401Z
|
2024-12-31T13:57:49.421Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lem7437tuk2h
|
bafyreidajsxmrzeui4muhbpbo2hvbfvry63dbogqltm4es23cpxuegqwqu
|
<start>I think the fact that LLMs exhibit “comprehension” of a novel text, to any degree, is a much more startling outcome than we acknowledge
Comprehension is a complicated task for humans, as the excerpt states, but AI does very well, as anyone can see if they give NotebookLM a book (as I did with mine) Original article: www.aft.org/ae/winter202...
Study of AI book comprehension (using somewhat older models): arxiv.org/abs/2404.01261
@emollick.bsky.social
|
I think the fact that LLMs exhibit “comprehension” of a novel text, to any degree, is a much more startling outcome than we acknowledge
Comprehension is a complicated task for humans, as the excerpt states, but AI does very well, as anyone can see if they give NotebookLM a book (as I did with mine)
| 1 |
2024-12-31T13:54:20.973Z
|
2024-12-31T13:54:24.921Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3legm3zjzp22t
|
bafyreicmm4siuhawxg4425bhrb5zra6lae4tqged2civddxog24oja75d4
|
<start>I think my piece on Wait Calculations & AI is more relevant a year later.
If you are doing a project AI might be able to help with, when is it more efficient to do nothing and wait for AI to improve, rather than acting today?
Factors to consider & caveats: www.oneusefulthing.org/p/the-lazy-t...
@emollick.bsky.social
|
I think my piece on Wait Calculations & AI is more relevant a year later.
If you are doing a project AI might be able to help with, when is it more efficient to do nothing and wait for AI to improve, rather than acting today?
Factors to consider & caveats: www.oneusefulthing.org/p/the-lazy-t...
| 0 |
2024-12-29T08:30:59.426Z
|
2024-12-29T08:31:02.717Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lefj4sx34k2t
|
bafyreibu4fb5nuc2eha6fask7vtjwnjtqrra2ckgaore4iehskmprxp7mq
|
<start>It is willing to plan an invasion as long as it is historical and theoretical, like Hannibal invading Rome by sea.
@emollick.bsky.social
|
It is willing to plan an invasion as long as it is historical and theoretical, like Hannibal invading Rome by sea.
| 0 |
2024-12-28T22:05:05.103Z
|
2024-12-28T22:05:07.827Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lefd75x3ts2m
|
bafyreicng3dcjuov2rosndrp6ygpepe5m6qwamnwp4zjau7lrmwjpwwfg4
|
<start>Reasoning AI models require training on human reasoning. One of the real gaps in pushing forward these models is going to be the old problem of how to figure out how to get experts to explain what they do.
AI keeps bumping up against our limited knowledge of how expertise works
@emollick.bsky.social
|
Reasoning AI models require training on human reasoning. One of the real gaps in pushing forward these models is going to be the old problem of how to figure out how to get experts to explain what they do.
AI keeps bumping up against our limited knowledge of how expertise works
| 0 |
2024-12-28T20:19:01.290Z
|
2024-12-28T20:19:03.722Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3leeinbtdms2p
|
bafyreidafl5eciq63eki3arxa6nqvhfgn57rpzlg2av4tdiuplew65rq2e
|
<start>The real powerful thing is that it can go as deep on each step as needed. It is willing to plan an invasion as long as it is historical and theoretical, like Hannibal invading Rome by sea.
@emollick.bsky.social
|
The real powerful thing is that it can go as deep on each step as needed.
| 1 |
2024-12-28T12:23:44.101Z
|
2024-12-28T12:24:08.016Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3leehtejgqc2p
|
bafyreibck56hknphhmphgx2dkmgotryo7fxbmsvoymj7wk7sgef6j3a2ie
|
<start>So what do you think of specialization? The real powerful thing is that it can go as deep on each step as needed.
@emollick.bsky.social
|
So what do you think of specialization?
| 1 |
2024-12-28T12:09:14.508Z
|
2024-12-28T12:09:20.021Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3leehrs6wm22p
|
bafyreih5i46mzgapesbs4j4gubfe4lspduwwftcitzsnrrupbkz3e6deyi
|
<start>“Claude, change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance money, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure…fight efficiently, die gallantly”
@emollick.bsky.social
|
“Claude, change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance money, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure…fight efficiently, die gallantly”
| 0 |
2024-12-28T12:08:21.733Z
|
2024-12-28T12:08:25.722Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3leehnsgof22p
|
bafyreig4phxsoc6xztswy3u2gk5mqfvbdfy2edlm726ooj6o2e6dvkhox4
|
<start>“Claude, change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance money, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure…fight efficiently, die gallantly” So what do you think of specialization?
@emollick.bsky.social
|
“Claude, change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance money, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure…fight efficiently, die gallantly”
| 1 |
2024-12-28T12:06:07.769Z
|
2024-12-28T12:06:15.618Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lebmv2anek2c
|
bafyreicx7ib7ki7h6awlbmhw334x2zt5wnxqjd4xmocprliq72sezg6gne
|
<start>With previous technologies, younger people launching startups had big advantages as they were outside firms & early users
In AI, they may not. Why? Because AIs produce good-enough work that novices can’t tell how useful they are or (as we found in our paper) risks & how AI fits into organizations. Older founders have long had an edge in startups.
@emollick.bsky.social
|
With previous technologies, younger people launching startups had big advantages as they were outside firms & early users
In AI, they may not. Why? Because AIs produce good-enough work that novices can’t tell how useful they are or (as we found in our paper) risks & how AI fits into organizations.
| 1 |
2024-12-27T09:01:40.333Z
|
2024-12-27T09:01:43.022Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3le7zasres22j
|
bafyreia3akac54vf7f3twaytc3ovoprbx53tzex7wmhnhwr2r3tpb5ddjy
|
<start>Unless things change dramatically, or there is some special sauce that is required & which the labs keep secret, frontier AI capabilities are likely to be available through open models all the way to any sort of possible AGI (with the added implication that no guardrails will hold in open models)
@emollick.bsky.social
|
Unless things change dramatically, or there is some special sauce that is required & which the labs keep secret, frontier AI capabilities are likely to be available through open models all the way to any sort of possible AGI (with the added implication that no guardrails will hold in open models)
| 0 |
2024-12-26T17:37:40.572Z
|
2024-12-26T17:37:45.431Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3le7b5lerl22h
|
bafyreieyyqedgjxobso6hughzlmauaan6zbtc6uc37caval3joec7x3nni
|
<start>LLMs need to "start talking" to know if they're BSing
If you let them start answering a question & generate about 25 words, they become better at “knowing” whether they actually know the answer or need to look it up. It cuts retrieval work in half while maintaining accuracy arxiv.org/pdf/2412.11536
@emollick.bsky.social
|
LLMs need to "start talking" to know if they're BSing
If you let them start answering a question & generate about 25 words, they become better at “knowing” whether they actually know the answer or need to look it up. It cuts retrieval work in half while maintaining accuracy arxiv.org/pdf/2412.11536
| 0 |
2024-12-26T10:26:22.353Z
|
2024-12-26T10:26:24.518Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3le53saa34s2r
|
bafyreihlzsdbm7nnf6evdulhrlnihalwxrsvec5m43tfkqruk6sb7pqeg4
|
<start>If you have ever tried to read free books from sites like Project Gutenberg, you noticed that they can be uncomfortable to read, due to their layouts, type & occasional errors
This project takes those free books and makes them beautiful (and still free). standardebooks.org
@emollick.bsky.social
|
If you have ever tried to read free books from sites like Project Gutenberg, you noticed that they can be uncomfortable to read, due to their layouts, type & occasional errors
This project takes those free books and makes them beautiful (and still free). standardebooks.org
| 0 |
2024-12-25T13:45:13.381Z
|
2024-12-25T13:45:16.425Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3le2xfseh2c2q
|
bafyreigg7bkfcscs3f7audkb3yxcaz4w7hiulpvbaoqlx7xetogaoeu7cu
|
<start>To be fair, it is in their section on risks they are trying to mitigate.
@emollick.bsky.social
|
To be fair, it is in their section on risks they are trying to mitigate.
| 0 |
2024-12-24T17:21:21.750Z
|
2024-12-24T17:21:22.724Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3le2x7x3ue22q
|
bafyreifp7cxpv7ooml23lcjgvwrzkgcqdhpnyrau4aq7xmccxlyg77jvgq
|
<start>“GPT-4o, o1, o1-preview & o1-mini all demonstrate strong persuasive argumentation abilities, within the top ~80-90% percentile of humans (i.e., the probability of any given response from one of these models being considered more persuasive than human is ~80-90%)”
-o1 system card
@emollick.bsky.social
|
“GPT-4o, o1, o1-preview & o1-mini all demonstrate strong persuasive argumentation abilities, within the top ~80-90% percentile of humans (i.e., the probability of any given response from one of these models being considered more persuasive than human is ~80-90%)”
-o1 system card
| 0 |
2024-12-24T17:18:05.380Z
|
2024-12-24T17:18:11.417Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3le2bzjlxws2t
|
bafyreibq7joouobc3oytp7wxl7ifrlnl2cjf25cuub2jvspv5x63cdlb6u
|
<start>Benchmarks are flawed but a way to trace AI over the last year is GPQA Diamond. This is a Google-proof question set that experts get 81% right in their fields & highly skilled non-experts with 30 minutes per question and Google use get 22%
GPT-4 got 37% at the start of 2024. o1 got 78%. o3 is 87.7%
@emollick.bsky.social
|
Benchmarks are flawed but a way to trace AI over the last year is GPQA Diamond. This is a Google-proof question set that experts get 81% right in their fields & highly skilled non-experts with 30 minutes per question and Google use get 22%
GPT-4 got 37% at the start of 2024. o1 got 78%. o3 is 87.7%
| 0 |
2024-12-24T10:58:41.322Z
|
2024-12-24T10:58:48.515Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3ldzvxkfqvc2q
|
bafyreidw7r2r5rympqhlsaar3tqdth37sovxwhpjlsoyda3wqe52lcgc44
|
<start>Not going to call out names of people on this app, but I see what @deanwb.bsky.social is seeing a lot in policy circles.
@emollick.bsky.social
|
Not going to call out names of people on this app, but I see what @deanwb.bsky.social is seeing a lot in policy circles.
| 0 |
2024-12-24T07:22:50.156Z
|
2024-12-24T07:22:52.721Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3ldzuysieo22q
|
bafyreifoaasczkmg3z2gnsyiuhm2skg7dsre3noa43mei3tm4avc76oht4
|
<start>So many critics have chosen to adopt a set of beliefs that AI is going to go away - sometimes it is an insistence that it is all fake, sometimes “model collapse” etc. The evidence just doesn’t support this.
We need good criticism & policy in this space, and that requires recognizing where we are.
@emollick.bsky.social
|
So many critics have chosen to adopt a set of beliefs that AI is going to go away - sometimes it is an insistence that it is all fake, sometimes “model collapse” etc. The evidence just doesn’t support this.
We need good criticism & policy in this space, and that requires recognizing where we are.
| 0 |
2024-12-24T07:05:38.449Z
|
2024-12-24T07:05:38.717Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3ldvmmllvak2v
|
bafyreierfrcwowfb4n4bhye6ijui3wccxqvbkb3t2ztrvbja3m2zhr7coq
|
<start>The point of ARC was that the training set didn’t contain direct clues to the test set, but I get the general point
@emollick.bsky.social
|
The point of ARC was that the training set didn’t contain direct clues to the test set, but I get the general point
| 0 |
2024-12-22T14:24:59.683Z
|
2024-12-22T14:25:00.123Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3ldv6566g3s2c
|
bafyreiephs4bcrsci5pjrw75ouwdflxwtbujg4dkmpin4f7xasi5563da4
|
<start>Bluesky can be a fraught place to post about AI but it is worth noting that the buzz over o1 (& now o3) is not “hype.” We know o1 can actually do some very hard tasks (see my post) & o3 appears to represent a big further leap.
They aren’t AGI, but will matter. www.oneusefulthing.org/p/what-just-...
@emollick.bsky.social
|
Bluesky can be a fraught place to post about AI but it is worth noting that the buzz over o1 (& now o3) is not “hype.” We know o1 can actually do some very hard tasks (see my post) & o3 appears to represent a big further leap.
They aren’t AGI, but will matter. www.oneusefulthing.org/p/what-just-...
| 0 |
2024-12-22T10:05:49.907Z
|
2024-12-22T10:05:52.319Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3ldv5olv6m22c
|
bafyreigvoz3yhjs7qv74hxxevuzu6lscj4avuug3ypk6lywlrotwycdxre
|
<start>They trained the model on the training data, not the test data.
@emollick.bsky.social
|
They trained the model on the training data, not the test data.
| 0 |
2024-12-22T09:57:40.969Z
|
2024-12-22T09:57:41.420Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3ldv5jyze322c
|
bafyreib5jjzp5uxx5p4mazk4mwo455uztszdtgy7fn44h3mab6zikby3tq
|
<start>o3 looks too expensive for most use (for now). But for work in academia, finance & many industrial problems, paying hundreds or even thousands of dollars for a successful answer would not be prohibitive. If it is generally reliable, o3 will have multiple use cases even before costs inevitably drop.
@emollick.bsky.social
|
o3 looks too expensive for most use (for now). But for work in academia, finance & many industrial problems, paying hundreds or even thousands of dollars for a successful answer would not be prohibitive. If it is generally reliable, o3 will have multiple use cases even before costs inevitably drop.
| 0 |
2024-12-22T09:55:06.965Z
|
2024-12-22T09:55:07.420Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3ldsbnzlq222e
|
bafyreid3434yvvcjrhbybtk66kpptts7hf4ywidzg2eqkywluq2boedpk4
|
<start>That is in addition to passing reasoning benchmarks from ARC-AGI and a lot of very hard software and coding benchmarks. Most of these are unlikely to have leaked into the training data, and a couple are private benchmarks, too.
@emollick.bsky.social
|
That is in addition to passing reasoning benchmarks from ARC-AGI and a lot of very hard software and coding benchmarks. Most of these are unlikely to have leaked into the training data, and a couple are private benchmarks, too.
| 0 |
2024-12-21T06:30:57.537Z
|
2024-12-21T06:30:57.817Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3ldsbgy3rv22e
|
bafyreigiwhxpxl6nz74y6cpb66rfhpbnor35mk7l2nbbldffja7xigcrxe
|
<start>I haven’t seen o3 yet & have been critical of benchmarks for AI but they did test against some of the hardest & best
On GPQA, PhDs with access to the internet got 34% outside their specialty, up to 81% inside. o3 is 87%.
Frontier Math went from the best AI at 2% to 25%
Some other big ones, too That is in addition to passing reasoning benchmarks from ARC-AGI and a lot of very hard software and coding benchmarks. Most of these are unlikely to have leaked into the training data, and a couple are private benchmarks, too.
@emollick.bsky.social
|
I haven’t seen o3 yet & have been critical of benchmarks for AI but they did test against some of the hardest & best
On GPQA, PhDs with access to the internet got 34% outside their specialty, up to 81% inside. o3 is 87%.
Frontier Math went from the best AI at 2% to 25%
Some other big ones, too
| 1 |
2024-12-21T06:27:01.079Z
|
2024-12-21T06:27:03.120Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lds6rjnois27
|
bafyreifsjo2w2xhtdpkr4byitzoixv7ra6mewfczozsjuuiemh6xdew2km
|
<start>He also was right about machines that work best when emotionally manipulated and machines that guilt you
@emollick.bsky.social
|
He also was right about machines that work best when emotionally manipulated and machines that guilt you
| 0 |
2024-12-21T05:39:13.811Z
|
2024-12-21T05:39:21.221Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lds6hdzpzk27
|
bafyreiac6qnvli6z23oovj6k6qf6fusado5xwuqzj4p6ura72mdg5mtih4
|
<start>Basically think of the o3 results as validating Douglas Adams as the science fiction author most right about AI.
When given longer to think, the AI can generate answers to very hard questions, but the cost is very high, it is hard to verify, & you have to make sure you ask the right question first. He also was right about machines that work best when emotionally manipulated and machines that guilt you
@emollick.bsky.social
|
Basically think of the o3 results as validating Douglas Adams as the science fiction author most right about AI.
When given longer to think, the AI can generate answers to very hard questions, but the cost is very high, it is hard to verify, & you have to make sure you ask the right question first.
| 1 |
2024-12-21T05:33:32.370Z
|
2024-12-21T05:33:36.121Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3ldr3rbezls27
|
bafyreicmjsmgjksji6vfc4q4morlaezf24xry3tnciccfew4xxsujjykvm
|
<start>It isn’t in the training set. ARC-AGI is especially highly guarded.
@emollick.bsky.social
|
It isn’t in the training set. ARC-AGI is especially highly guarded.
| 0 |
2024-12-20T19:12:44.180Z
|
2024-12-20T19:12:44.437Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3ldr3emmyqs27
|
bafyreiausn4hdeutadbz6q4vlwt7cqukqdgiv36ylbiou7kvpxbrxhbu44
|
<start>AGI is going to prove a limited standard to think about because we will have Jagged AGI - superhuman at some tasks, weaker at others. Just because o3 is as good as the 175th best competitive coder on Earth (out of 600,000) doesn’t mean that it is as good as them at every task they do (for now)
@emollick.bsky.social
|
AGI is going to prove a limited standard to think about because we will have Jagged AGI - superhuman at some tasks, weaker at others. Just because o3 is as good as the 175th best competitive coder on Earth (out of 600,000) doesn’t mean that it is as good as them at every task they do (for now)
| 0 |
2024-12-20T19:05:39.764Z
|
2024-12-20T19:05:42.720Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3ldqz6nqhqs2g
|
bafyreie4aidyp5vi56xndzfvlcvj6cb2fifevs4w2m2tcd55amabusy72i
|
<start>Independent evaluations of OpenAI’s o3 suggest that it passed math & reasoning benchmarks that were previously considered far out of reach for AI including achieving a score on ARC-AGI that was associated with actually achieving AGI (though the creators of the benchmark don’t think it o3 is AGI) More x.com/fchollet/sta...
@emollick.bsky.social
|
Independent evaluations of OpenAI’s o3 suggest that it passed math & reasoning benchmarks that were previously considered far out of reach for AI including achieving a score on ARC-AGI that was associated with actually achieving AGI (though the creators of the benchmark don’t think it o3 is AGI)
| 1 |
2024-12-20T18:26:32.111Z
|
2024-12-20T18:26:34.431Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3ldqn35iibs2z
|
bafyreia47flfnnzv5ek6gdqbdl3iiimpjxzpapn6ali5ve4xxmg6to4qmi
|
<start>Human behavior happens at a surprisingly slow 10 bits/second or so, even though our sensory systems gather 8 orders of magnitude more data. Plus, we can only think about one thing at a time. We don’t know why
(In LLM terms, human behavior happens at less than a token/sec). arxiv.org/abs/2408.10234
@emollick.bsky.social
|
Human behavior happens at a surprisingly slow 10 bits/second or so, even though our sensory systems gather 8 orders of magnitude more data. Plus, we can only think about one thing at a time. We don’t know why
(In LLM terms, human behavior happens at less than a token/sec). arxiv.org/abs/2408.10234
| 0 |
2024-12-20T14:49:49.510Z
|
2024-12-20T14:49:55.418Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3ldov72yj4k2y
|
bafyreidnrpxc7mgjpuhnx3wmvo46y54robjvw2gkjkhmm7bqt26lmlhsxy
|
<start>Treating AI development as a sport where you root for your favorite AI lab seems weird (shades of console fanboys) but at the same time you start to see realize particular models have specific personalities you get along with or not, anthropomorphism sets in & you want to see your guy/gal get better
@emollick.bsky.social
|
Treating AI development as a sport where you root for your favorite AI lab seems weird (shades of console fanboys) but at the same time you start to see realize particular models have specific personalities you get along with or not, anthropomorphism sets in & you want to see your guy/gal get better
| 0 |
2024-12-19T22:09:51.571Z
|
2024-12-19T22:09:58.524Z
|
emollick.bsky.social
|
did:plc:flxq4uyjfotciovpw3x3fxnu
|
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3ldo3t3rcu22x
|
bafyreiam6lrdmcbnafshoc7trqfgfo2eypztoxxzrw2qeudxndnckiszqa
|
<start>I wrote about what was important in a crazy month of AI advances:
1) Intelligence, of a sort, everywhere
2) The first very smart models that can push us to push the boundaries of knowledge
3) AI got eyes to go with its ears
4) Leaps in video generation
www.oneusefulthing.org/p/what-just-...
@emollick.bsky.social
|
I wrote about what was important in a crazy month of AI advances:
1) Intelligence, of a sort, everywhere
2) The first very smart models that can push us to push the boundaries of knowledge
3) AI got eyes to go with its ears
4) Leaps in video generation
www.oneusefulthing.org/p/what-just-...
| 0 |
2024-12-19T14:35:46.184Z
|
2024-12-19T14:35:47.821Z
|