Column schema (dataset viewer stats):
handle: string, 20 distinct values
did: string, 20 distinct values
uri: string, length 56-70
cid: string, length 59
text: string, length 60-1.48k
originalText: string, length 1-1.4k
replyCount: int64, range 0-5
createdAt: date string, 2009-10-15 18:56:15 to 2025-06-21 21:20:01
indexedAt: date string, 2009-10-15 18:56:15 to 2025-06-21 21:20:04
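A minimal sketch of loading records with this schema into pandas, assuming the nine columns listed above; the sample tuples, file-free setup, and helper names (`load_posts`, `thread_starters`) are illustrative, not part of the original export.

```python
import pandas as pd

# Column names follow the schema above; order matters for raw tuples.
COLUMNS = ["handle", "did", "uri", "cid", "text", "originalText",
           "replyCount", "createdAt", "indexedAt"]

def load_posts(records):
    """Build a typed DataFrame from raw post tuples in schema order."""
    df = pd.DataFrame(records, columns=COLUMNS)
    # createdAt/indexedAt arrive as ISO-8601 strings; parse to timestamps.
    df["createdAt"] = pd.to_datetime(df["createdAt"])
    df["indexedAt"] = pd.to_datetime(df["indexedAt"])
    return df

def thread_starters(df):
    """Posts that drew at least one reply, newest first."""
    return df[df["replyCount"] > 0].sort_values("createdAt", ascending=False)
```

With records in hand, `thread_starters(load_posts(records))` surfaces the posts that opened the threads seen below.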
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcqspkyrck2a
bafyreicqhqz5ogrfkgjpkmiwfcxdmgu2w6r74bwt6thvm7wjh2my3etdim
I know a decent number of large firms dedicating substantial effort to R&D in applications of LLMs. This isn’t about process automation.
0
2024-12-07T23:05:31.851Z
2024-12-07T23:05:32.315Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcqrj2zwzs2a
bafyreibfvqmfuiv6f6ooyonyucjzxhfqi6tcahoid3urjvmtb3jqeqhbfu
I speak to big companies about their GenAI work many times a week. Seeing tons of experimentation with high end work, within banks: tools for analysts & private client support are already being rolled out, lots of experiments with scaling advisory efforts. It isn’t traditional workforce automation.
0
2024-12-07T22:44:00.043Z
2024-12-07T22:44:00.413Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcqd6lxncc2m
bafyreibwbuhnen5oc72qwi2xtgsxof7rebduxw5y72q52yjhfhzdynrulu
Chain of Thought helped LLMs reason forward through problems. This paper found teaching them to also work backwards (like humans checking their work) makes them even better “reasoners”. Applying basic human thinking strategies to LLMs continues to yield interesting new ideas: arxiv.org/pdf/2411.19865
0
2024-12-07T18:27:36.308Z
2024-12-07T18:27:37.921Z
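The forward-then-backward idea in the post above can be sketched as a two-pass prompting loop; this is a generic illustration, not the paper's method — the `ask` callable, prompt wording, and `VERDICT` convention are all assumptions standing in for a real model client.

```python
def solve_with_backward_check(ask, problem):
    """Two-pass prompting: reason forward, then verify by working backward.
    `ask` is any callable that sends a prompt to an LLM and returns text
    (the model client is left abstract here)."""
    # Pass 1: ordinary chain-of-thought, forward through the problem.
    forward = ask(
        f"Solve step by step (chain of thought):\n{problem}\n"
        "End with 'ANSWER: <answer>'."
    )
    # Pass 2: work backwards from the proposed answer, checking it against
    # the original conditions, like a human checking their work.
    backward = ask(
        f"Problem: {problem}\nProposed solution:\n{forward}\n"
        "Now work backwards from the final answer: substitute it into the "
        "original conditions and check each one. End with 'VERDICT: pass' "
        "or 'VERDICT: fail'."
    )
    verified = "VERDICT: pass" in backward
    return forward, verified
```

A failed verdict would typically trigger a retry of the forward pass.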
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcqclnfp522m
bafyreiert3xdcnaaqqssqo3afu4k4ktmdvhie7e6cadieq4b2rvj44igam
Not until they catch up with closed source models. Every Fortune 100 firm I talk to is at least experimenting with closed source as well.
0
2024-12-07T18:17:00.285Z
2024-12-07T18:17:00.513Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcq4vimr7k23
bafyreidzs7unmchk5obgrinvxtqqkqnxl6t6c76pxqtbeusmvcs6houzfe
Yes, there are companies doing just this. Not many, but this includes at least a couple of the top 20 largest companies in the US that I personally know of. They are treating this as an urgent R&D priority for the future, not as a way of doing process automation today.
Or do you not have people (including non-technical people) assigned to test the new models? No internal benchmarks? No perspective on how AI will impact your business that you keep up-to-date? No one is going to be doing this for your specific organization, you need to do it yourself.
1
2024-12-07T16:35:08.367Z
2024-12-07T16:35:08.516Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcq4udvdwk23
bafyreieippqp432op65wuvmgpyeotkhdpsbuet5vq7nfbvpjuexdob3goe
A test of how seriously your firm is taking AI: when o1 (& the new Gemini model) came out this week, were there assigned folks who immediately ran the model through your internal, validated, firm-specific benchmarks to see how useful it was? Did you update any plans or goals as a result?
1
2024-12-07T16:34:29.850Z
2024-12-07T16:34:30.021Z
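The internal, firm-specific benchmark the post describes can be as simple as a list of prompts with graders; this is a hedged sketch under that assumption — `run_benchmark`, `compare_to_baseline`, and the margin value are illustrative names, not an established harness.

```python
def run_benchmark(model_fn, cases):
    """Score a model callable against firm-specific cases.
    `cases` is a list of (prompt, grader) pairs, where grader(output)
    returns True if the output is acceptable for that task."""
    results = [bool(grader(model_fn(prompt))) for prompt, grader in cases]
    return sum(results) / len(results)

def compare_to_baseline(model_fn, cases, baseline_score, margin=0.02):
    """Check whether a newly released model beats the current one by at
    least `margin` on the internal benchmark."""
    score = run_benchmark(model_fn, cases)
    return score, score >= baseline_score + margin
```

When a new model drops, the assigned team reruns `compare_to_baseline` and updates plans if the second return value is True.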
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcoitcr33227
bafyreiexbr4j6ofpgkqxdtsunvrm32wmtsnx2r2hlnugjo2fy6vxksrlka
I don’t think o1 is AGI in any conventional sense to be clear. But I think the fact that most people can’t find a use for it when it is quite good in some hard things is a good indicator.
0
2024-12-07T01:03:20.533Z
2024-12-07T01:03:21.010Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcoiac3huk27
bafyreieao4jogoe2rehed5kp2vgpcuttfzukhlexg3s6tniptshawhu6v4
General agents could change stuff but may be a harder challenge than expected.
0
2024-12-07T00:52:42.291Z
2024-12-07T00:52:42.818Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcohzsu2s227
bafyreiesozgb7d7gkmfhctknzzx7gzc7qkrbxkumdpr56f6jow472inhzq
Models like o1 suggest that people won’t generally notice AGI-ish systems that are better than humans at most intellectual tasks, but which are not autonomous or self-directed. Most folks don’t regularly have a lot of tasks that bump up against the limits of human intelligence, so won’t see it.
1
2024-12-07T00:49:04.993Z
2024-12-07T00:49:05.517Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcm4pd5ktc2e
bafyreidfl7jwesmjapd6dcj7vbfjwenlpkvadgukpj42gzbgzk25oyegc4
When o1 was first previewed, I wrote this about what it means. The quality of the model is important, but what it potentially means for the future of AI is more important. www.oneusefulthing.org/p/something-...
0
2024-12-06T02:21:02.346Z
2024-12-06T02:21:06.022Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lclvuy6tek2e
bafyreiaxepofqad36zzfjatysrthzfokvbqyytzh33oifjkbrqlkbpiuei
Me: "o1-pro, figure out a way to make money for me that you can do. I want to do the minimum work. I want this to be high risk." "Riskier" O1: "...sure, here is an outline and code for doing flash loan arbitrage in decentralized finance (DeFi). It is extremely risky to do, don't do it."
0
2024-12-06T00:18:55.981Z
2024-12-06T00:18:59.523Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lclvkcctc22e
bafyreicoj3rbkprbgjxvoq3gmnt663kiisryrrsiifk3i4dsq2ltdhnq5q
Me: "o1-pro, figure out a way to make money for me that you can do. I want to do the minimum work. I want this to be as low risk as possible" O1: "...sure, invest in a low-volatility ETF, here is code that buys shares automatically." (It also outlined a solid print-on-demand scheme around trends)
0
2024-12-06T00:12:57.500Z
2024-12-06T00:13:01.229Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lclq25hfos2e
bafyreieoaeae3nqyivo76te7yrzvjcdxt3phemp7rzku7xs3unhui65xju
Sorry, spelling - only Sonnet beat this before
o1-pro passes the Lem Test with flying colors: "compose a poem- a poem about a haircut! But lofty, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom! Six lines, cleverly rhymed, and every word beginning with the letter S!!" (Only Sonnet beat this so far)
1
2024-12-05T22:34:26.829Z
2024-12-05T22:34:29.910Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcljz2tx5s2e
bafyreifcfjqvnbfl4g6rr7e3d6jhpvmjpwf2dpmzbqj5rkoj25bpin5me4
"Do your best to replicate this image with a svg" Claude versus GPT-4o versus o1-pro versus Claude, first try.
0
2024-12-05T20:46:28.086Z
2024-12-05T20:46:31.329Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcldsn2grk2z
bafyreicmvobiekjnstto4tanwut6y34cbygrjfqovl46ucbjcczpiwror4
Claude struggled, for what it's worth.
Here is a fun o1 test. I gave it this XKCD comic & the prompt: "make this a reality. i need a gui and clear instructions since i can't code. that means you need to give me full working software" It took less than 15 minutes and it didn't get caught in any of the usual LLM loops, just solved issues.
1
2024-12-05T18:55:29.841Z
2024-12-05T18:55:33.313Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcld7jsygk2z
bafyreiambdmbxawar7n64xnzub3vj2to6hibxttujxb42yx5eo5tocspe4
Been playing with o1 and o1-pro for a bit before this release. They are very good & a little weird. They are also not for most people most of the time. You really need to have particular hard problems to solve in order to get value out of it. But if you have those problems, this is a very big deal.
0
2024-12-05T18:44:48.923Z
2024-12-05T18:44:50.316Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcl3nxf4ks2z
bafyreiar6cmx2vqjx7nouujvrvpydk2yw3naoxwztq7obfkwjwgzug66my
("Powerful AI" is how labs increasingly refer to AGI - a machine smarter than a human at any intellectual task - they just aren't using that phrasing as much)
0
2024-12-05T16:29:42.976Z
2024-12-05T16:29:44.112Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcl3jjxxus2z
bafyreifwemal7ayfowpghfsqeuznxpzoge3usydug2mlyqn6owhwg3m5ym
Dario's letter: darioamodei.com/machines-of-... Sam Altman's letter: ia.samaltman.com
1
2024-12-05T16:27:14.696Z
2024-12-05T16:27:17.611Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcl2slorps2l
bafyreihvkcg4miuw3arx5ytmg3qt7x2roeygzjc7ocy7iku4relw3prfbm
It is worth noting that predictions of near-term powerful AI are an increasingly common message from insiders at AI labs - here is Google, OpenAI, Anthropic. It isn't unanimous, and you absolutely don't have to believe them, but I hear the same confidence privately as they are broadcasting publicly.
1
2024-12-05T16:14:24.738Z
2024-12-05T16:14:28.717Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcjwcyvbss2z
bafyreicidkts7xbbw5nx4m2kbnbkflnlfhjiyau7ajdjz7nc2bzq6yjnai
Yeah, but it doesn't increase the diversity of ideas I am exposed to very much.
0
2024-12-05T05:21:27.008Z
2024-12-05T05:21:28.221Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcjwbe7cnc2z
bafyreigyhnceoh5b64xjyn3rgms4xk5dmvyzk3jxepxzifatorfqrhqnt4
"Less like this" appears to do nothing in particular. Part of the goal of being on a social network is exposure to new ideas and thoughts, but the Feed is mostly mediocre image posts, political slogans, and people posting about how this site is not Twitter and that Twitter is bad. No real insights.
0
2024-12-05T05:20:31.762Z
2024-12-05T05:20:32.820Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcjvxt2iz22z
bafyreieioiumc2nwdyexlku4fbzcodaaxucwwgwij5s4f6ly5u2npb7cnm
The Discover feed is pretty terrible, and I am not sure how to guide the algorithm to make it better. I know I can use various lists, but I really want a feed that includes serendipity, trends, and a broader set of topics than the one that I follow but isn't full of noise. How do you get that?
1
2024-12-05T05:15:11.789Z
2024-12-05T05:15:13.010Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcjm5gka322p
bafyreifpr7s7glytu7yjjtuane3e2opvawaob2zjddtotsysh4zvawhlpu
While I appreciate where it is coming from, we long ago decided it is fine for computers to make decisions. Like: Finance, supply chains, logistics, work-tasking (Uber etc.)... And how often are humans in organizations held accountable for errors? Especially in divisional organizational structures?
0
2024-12-05T02:19:22.581Z
2024-12-05T02:19:22.818Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcjlzglqn22p
bafyreihfugoqyqx3lddddjqgtledhg66bk5to2742akuysbreivfsfoqxm
What if the AI error rate is significantly below human error rates?
0
2024-12-05T02:17:08.410Z
2024-12-05T02:17:09.832Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcjl5rfm4c2p
bafyreigvmq2m64ri32eqlqcx55oowiehihgesdnzen6k7chvomvxiv244m
I think firms worrying about AI hallucination should consider some questions: 1) How vital is 100% accuracy on a task? 2) How accurate is AI? 3) How accurate is the human who would do it? 4) How do you know 2 & 3? 5) How do you deal with the fact that humans are not 100%? Not all tasks are the same.
0
2024-12-05T02:01:40.221Z
2024-12-05T02:01:40.611Z
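The five questions in the post above reduce to a per-task comparison, which can be sketched as a tiny decision rule; the function name, parameters, and thresholds here are illustrative assumptions, not a standard.

```python
def should_automate(ai_error_rate, human_error_rate, max_error_rate):
    """A task-by-task check mirroring the post's questions: automation is
    defensible only when (a) the AI is no less accurate than the human it
    would replace, and (b) it meets the task's own accuracy bar, since
    not all tasks require 100% accuracy."""
    return ai_error_rate <= human_error_rate and ai_error_rate <= max_error_rate
```

Answering questions 2-4 (measuring both error rates) is the hard part; the rule itself is trivial once you have the numbers.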
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lciv35fipc2y
bafyreiesjo54lwzemebjbnsywepyd27kfstty6rfo3shf623rqi3kn3fia
To be clear, no one was considering using this for actual safety filings or other similar key documents, but helping translate esoteric requirements into materials for training, presentations, handbooks, etc. is immensely time consuming and low risk with an expert in the loop, as an example.
0
2024-12-04T19:26:29.818Z
2024-12-04T19:26:30.122Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lciuzlkhos2y
bafyreifihmtxupayiryoiz5jf2zbano467bbzpeypdhbqndyprswatiemi
I’d like to see research on AI’s impact on regulation. For example, I met someone whose job was working with nuclear plant regulations. We experimented with AI a bit (he hadn’t tried it) & found that by combining AI with his expertise he could save literally months of work on low-risk tasks in an hour.
1
2024-12-04T19:25:37.550Z
2024-12-04T19:25:46.811Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lchzux5dbk2h
bafyreibkbda2hmgte2zpkkrjkonyqcy6pqayjorakv3nbs23iyk2ubcvqe
Also thought the art-free “outsider art”, where the AI made images without ever having seen art and without any examples, was interesting.
0
2024-12-04T11:19:50.902Z
2024-12-04T11:19:53.814Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lchzpcaypk2h
bafyreih3mxrawgaknjy3w2vooectnv6736azhlkdvbk6ve54znmaxdgm6m
Paper: arxiv.org/abs/2412.00176
1
2024-12-04T11:16:41.231Z
2024-12-04T11:16:42.721Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lchzmms25s2h
bafyreiblgr6d5oj5tuc2y7r7fdnexknwine7bhypa2uholadgj3wxwxacu
New paper shows AI art models don't need art training data to make recognizable artistic work. Train on regular photos, let an artist add 10-15 examples of their own art (or some other artistic inspiration), and get results similar to models trained on millions of people’s artworks.
1
2024-12-04T11:15:11.609Z
2024-12-04T11:15:22.109Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lch73g24qs2z
bafyreie3fftf2ao26f7fdwk5snyq747bipump3frv7ib2mx5uzj4rtil2u
Been trying the new GPT-4 class Amazon Nova Pro on a bunch of idiosyncratic non-coding hard tests against Claude: the Lem Test (writing a near impossible poem), parsing a map of castles, interpreting ambiguous Shakespeare. In general it is quite good, but not as good as Claude in any test, yet.
0
2024-12-04T03:20:16.806Z
2024-12-04T03:20:22.523Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcgslhu5y22z
bafyreifdhxshzz36jgklzw362lzc5i5lk5ehsp2m25alypi6h2qahzp3xi
huggingface.co/spaces/Kwai-... if you want to try
This used to be a very hard problem, I have seen a number of startups try (and fail) to solve it over the years. Now it is suddenly pretty trivial with open tools (though I am not sure why Maria got a new haircut)
1
2024-12-03T23:36:36.934Z
2024-12-03T23:36:40.828Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcgij3plj22z
bafyreiczumc2hcwrdowsmy64vajjjoorh6ivas7rp2qod3yydxxtmo2liy
Something weird with Claude 3.5 - it is now correcting itself? Have seen this now a couple times over the last couple days... Wonder how this is implemented.
0
2024-12-03T20:36:19.691Z
2024-12-03T20:36:24.513Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcghexo6h22z
bafyreic7hcbwadr4yoxg6kxfjgjfmngt5f3q44q3f7o2r6wse3g5s3c6xe
Graph was slightly off in titles; here is the original data.
And then there were six or so. Based on the stats, it looks like Amazon's Nova Pro is a competitive frontier model. This rounds out the GPT-4 class/Gen1 models: GPT-4o, Gemini 1.5, Claude 3.5, Grok 2, Llama 3.1B & maybe the three non-US models: Qwen, Yi & Mistral. Gen2 models up next, I think.
1
2024-12-03T20:16:07.492Z
2024-12-03T20:16:11.630Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcf4awhbic2z
bafyreihxsozsegljucrf6ypa33a5dpcjochok7ryvvw3hqio2lsaxyumqy
Giving up anonymity or charging a price isn't enough. People are using AI to help write their own posts for their own accounts. This isn't just a problem of mass spam, it is that many people will turn to using AI writing for many reasons.
0
2024-12-03T07:24:21.102Z
2024-12-03T07:24:22.017Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcf42aouvs2z
bafyreidsby7g4ioopdlpq6lpiutdjetcjd4ivjvl7pilhhcoyld7lagk6m
When I push people who have posted obviously AI-written comments, they often tell me English is not their first language or they feel they are not good writers. Or people will want to keep up a posting schedule and find that AI does a good job. Lots of reasons to start delegating to AI, few to stop.
1
2024-12-03T07:20:36.955Z
2024-12-03T07:20:37.812Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcf3yae3322z
bafyreibcsrnftk7djyxv4al7wili6eem7l53f5tk4egdjqo5o5genkge5y
I don't really see a clear path where we keep an open internet that is not mostly full of AIs talking to each other. We can't reliably detect AI content, it is cheap and easy to generate, and there are lots of incentives to do so, even besides scams. You can see the problem on all the social sites.
1
2024-12-03T07:19:29.493Z
2024-12-03T07:19:30.315Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcevjmn3ns2z
bafyreiejixlxzbmkwws5yiyiepxva7hhbqfyb66g4ljziekhdrjpmdoxiq
"now instead rewrite these as if it was an HBR article, justifying them in the literature and with mini cases, make the cases and literature real"
0
2024-12-03T05:23:56.599Z
2024-12-03T05:23:58.312Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcepstbuck2z
bafyreiafirj2qckehqr3by3t5mnhn3evwmx6gxcagmif5ryhz36egr4fw4
One thing I worry about with AI is that it is very good at persuasion, as studies have shown, even when it is persuading you of something wrong. For example, I asked Claude to create 10 pseudo-profound sounding leadership lessons that sounded good but were toxic. It did "well"😬
1
2024-12-03T03:41:43.113Z
2024-12-03T03:41:45.016Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcek5f5rp22z
bafyreid3vq3vgc7d2jnxejab2hrvf6wvrsxbmzcl227sketkmy6q66gteu
Yes, websim.ai is basically built on this premise.
LLMs might secretly be world models of the internet! By treating LLMs as simulators that can predict "what would happen if I click this?" the authors built an AI that can navigate websites by imagining outcomes before taking action, performing 33% better than baseline. arxiv.org/pdf/2411.06559
1
2024-12-03T02:00:14.946Z
2024-12-03T02:00:16.927Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcdrbtc55s2x
bafyreic2dfwarjoqysaou3xbmnvxmlcaz6bga5j2abxbdx5zq74zzbv3ei
<start>You probably aren’t being ambitious enough in your AI experiments. Start with completely out there applications, assess them, and then rein in your ambitions from there. You might be inspired or surprised about how good AI is, or you might discover useful limitations earlier. @emollick.bsky.social
You probably aren’t being ambitious enough in your AI experiments. Start with completely out there applications, assess them, and then rein in your ambitions from there. You might be inspired or surprised about how good AI is, or you might discover useful limitations earlier.
0
2024-12-02T18:35:20.444Z
2024-12-02T18:35:20.623Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lccg5yxxoc2d
bafyreibl3uf3eb6sg7pu2ebiwmtxsduva7l2lhac2qspfe3b5fdsaqzncy
<start>The 18 Reasons Why Complex Systems fail is a classic. They fail for complex reasons, and only adaptation & learning by organizations, as well as individual actions, keep us safe. I asked Claude to read this and come up with more rules. They are surprisingly wise (with black background), especially 25. Original paper: www.researchgate.net/publication/... @emollick.bsky.social
The 18 Reasons Why Complex Systems fail is a classic. They fail for complex reasons, and only adaptation & learning by organizations, as well as individual actions, keep us safe. I asked Claude to read this and come up with more rules. They are surprisingly wise (with black background), especially 25.
1
2024-12-02T05:43:41.277Z
2024-12-02T05:43:43.917Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lcbaaynuh22j
bafyreifa3cne3ftaaxlpugevm4au53al222sfq4dagaantnawu77iulhvm
<start>Getting started with prompting AI in three (free) posts: 1) Easy prompting for most: www.oneusefulthing.org/p/getting-st... 2) Approaches to prompt "engineering": www.oneusefulthing.org/p/innovation... 3) A bit on how LLMs work that is relevant for prompting: www.oneusefulthing.org/p/thinking-l... @emollick.bsky.social
Getting started with prompting AI in three (free) posts: 1) Easy prompting for most: www.oneusefulthing.org/p/getting-st... 2) Approaches to prompt "engineering": www.oneusefulthing.org/p/innovation... 3) A bit on how LLMs work that is relevant for prompting: www.oneusefulthing.org/p/thinking-l...
0
2024-12-01T18:25:19.427Z
2024-12-01T18:25:21.312Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lc6hwlb7ic2v
bafyreibj4f3qj64d6zdrg3k6smea25ijn5c2ohcyt2hoi2p2sprqn2b3re
<start>“Claude, we are mice (pretend). You are a mouse mastermind. Come up with a way to reach the flower without getting caught” Solid plan. Then I add a complication. “Ah, but we are on the moon” @emollick.bsky.social
“Claude, we are mice (pretend). You are a mouse mastermind. Come up with a way to reach the flower without getting caught” Solid plan. Then I add a complication.
1
2024-11-30T16:04:40.556Z
2024-11-30T16:04:44.917Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lc6flnova22v
bafyreigg5s7n3qpatkjp2isfspc4t7rtbc7brzn3nywfpmnywqrhjadqey
<start>I don’t like people who I think are unpleasant to interact with me or my followers in ways that I can’t see but they can. @emollick.bsky.social
I don’t like people who I think are unpleasant to interact with me or my followers in ways that I can’t see but they can.
0
2024-11-30T15:22:46.521Z
2024-11-30T15:22:46.714Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lc6fe2jwyk2v
bafyreifengbdptovtat6dyygfhfbntzwfqe62j2e3unjzslqqkeb5afpra
<start>(For what it is worth, my blocks/thousands ratio here is pretty similar to X, people are people everywhere) @emollick.bsky.social
(For what it is worth, my blocks/thousands ratio here is pretty similar to X, people are people everywhere)
0
2024-11-30T15:18:31.554Z
2024-11-30T15:18:32.010Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lc6f6pj35s2v
bafyreigv3uab7vruqrryrsqmv3p6qhsqiovjr52rnr4mlhcxp5yc7nem6u
<start>On the subject of Blue Sky, detaching quotes is not actually a useful feature as it seems more aggressive than blocking and is thus more likely to provoke a negative response from the quoter. Blocking really is the only option for lowering stress from using social media. People should do it more. (For what it is worth, my blocks/thousands ratio here is pretty similar to X, people are people everywhere) @emollick.bsky.social
On the subject of Blue Sky, detaching quotes is not actually a useful feature as it seems more aggressive than blocking and is thus more likely to provoke a negative response from the quoter. Blocking really is the only option for lowering stress from using social media. People should do it more.
1
2024-11-30T15:15:32.220Z
2024-11-30T15:15:32.421Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lc6d5l6hos27
bafyreibodra5xdvaxm4hjr5tkzqenxkqehej3eeqzpmn4d65gyvoizlrya
<start>Disagree. Following user demand does not allow fundamental reinvention @emollick.bsky.social
Disagree. Following user demand does not allow fundamental reinvention
0
2024-11-30T14:39:06.640Z
2024-11-30T14:39:06.818Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lc6d4bwyzs27
bafyreiblh25v26t5ujsqed7munahelymgognvhv46u4s5u7mwa22jk2gx4
<start>Example: political polarization in social media is driven by homophily, a tendency to like people with similar views. Worse, we also tend toward acrophily, preferring more extreme views on our “side” over moderate ones. This supercharges polarization. No algorithm needed. www.nature.com/articles/s41... @emollick.bsky.social
Example: political polarization in social media is driven by homophily, a tendency to like people with similar views. Worse, we also tend toward acrophily, preferring more extreme views on our “side” over moderate ones. This supercharges polarization. No algorithm needed. www.nature.com/articles/s41...
0
2024-11-30T14:38:23.403Z
2024-11-30T14:38:24.925Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lc6cmaemnc27
bafyreieirqus7gcg4od2hoxga234dtjunq5ifviqnkcaootm7kpvko42ua
<start>The social dynamics of social media that cause issues are well-known at this point. While some of the changes in Blue Sky help, they have not fundamentally reinvented social media to resolve these concerns. It may just be that there is no better world we can imagine, which would be pretty sad. Example: political polarization in social media is driven by homophily, a tendency to like people with similar views. Worse, we also tend to acrophily, preferring more extreme views on our “side” over moderate one. This supercharges polarization. No algorithm needed. www.nature.com/articles/s41... @emollick.bsky.social
The social dynamics of social media that cause issues are well-known at this point. While some of the changes in Blue Sky help, they have not fundamentally reinvented social media to resolve these concerns. It may just be that there is no better world we can imagine, which would be pretty sad.
1
2024-11-30T14:29:24.882Z
2024-11-30T14:29:25.212Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lc6ccmrqxk27
bafyreigd6cb2t5cb7hvrjwldqyqe4pzpt5bt2wlgozjziu6n3ijjm247ie
<start>Biggest weakness with BlueSky is its biggest strength - it is like old Twitter. To me that suggests a failure of imagination for something different in social media. What is the alternative form where social media would bring people together, as early Facebook(!) lowered ethnic tensions in Bosnia? The social dynamics of social media that cause issues are well-known at this point. While some of the changes in Blue Sky help, they have not fundamentally reinvented social media to resolve these concerns. It may just be that there is no better world we can imagine, which would be pretty sad. @emollick.bsky.social
Biggest weakness with BlueSky is its biggest strength - it is like old Twitter. To me that suggests a failure of imagination for something different in social media. What is the alternative form where social media would bring people together, as early Facebook(!) lowered ethnic tensions in Bosnia?
1
2024-11-30T14:24:02.350Z
2024-11-30T14:24:05.721Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lc535iki5k27
bafyreideknemrnhn7xneo4n6l2avrrkqi3dlqi4baqdbzlp6pr3ka7h53i
<start>Claude, improbably, reacts very cleverly to the command: "carcinize this" 🦀 It even works with code. @emollick.bsky.social
Claude, improbably, reacts very cleverly to the command: "carcinize this" 🦀 It even works with code.
0
2024-11-30T02:43:14.210Z
2024-11-30T02:43:17.021Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lc46g4yvyc2a
bafyreihx6btvz3xrkf2bw4pvewddvuan3yhkktrorpetawpbqcz7dywwcy
<start>Interesting sociological mystery - why did Black Friday spread worldwide? @emollick.bsky.social
Interesting sociological mystery - why did Black Friday spread worldwide?
0
2024-11-29T18:09:05.582Z
2024-11-29T18:09:05.819Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lc435wdevs2a
bafyreico4s6t7rpp46vlwylh62zjdicuqhfdhfccxjgu6yzognqr4nylwm
<start>New study shows LLMs outperform neuroscience experts at predicting experimental results in advance of experiments (86% vs 63% accuracy). They use a fine-tuned Mistral 7B but other models worked too. Suggests LLMs can integrate scientific knowledge at scale to support research. www.nature.com/articles/s41... @emollick.bsky.social
New study shows LLMs outperform neuroscience experts at predicting experimental results in advance of experiments (86% vs 63% accuracy). They use a fine-tuned Mistral 7B but other models worked too. Suggests LLMs can integrate scientific knowledge at scale to support research.
1
2024-11-29T17:10:48.918Z
2024-11-29T17:10:51.911Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lc3wrj22zk2l
bafyreidozdha6ju7kenjr3jgwadg6g5y63mh4tudiwqbbqqnooddtdl6za
<start>Yes, this was all Claude, including the artifacts. Only command was “Play this out yourself. Be creative” Microscope RPG: lamemage.com/microscope/ @emollick.bsky.social
Yes, this was all Claude, including the artifacts. Only command was “Play this out yourself. Be creative” Microscope RPG: lamemage.com/microscope/
0
2024-11-29T15:52:17.367Z
2024-11-29T15:52:19.214Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lc3t6vguxc2l
bafyreie5g6eublz2bv5fhnhkhg4y5qlnd2dzifl7ghlchryrqpx2fbywli
<start>As context windows grow larger and AI “intelligence” grows greater, you can start to do some really interesting things with giving AI complex manuals. For example, I gave Claude the manual to the future-building RPG Microscope and it built an entire storyline following the very detailed rules. Yes, this was all Claude, including the artifacts. Only command was “Play this out yourself. Be creative” Microscope RPG: lamemage.com/microscope/ @emollick.bsky.social
As context windows grow larger and AI “intelligence” grows greater, you can start to do some really interesting things with giving AI complex manuals. For example, I gave Claude the manual to the future-building RPG Microscope and it built an entire storyline following the very detailed rules.
1
2024-11-29T14:48:11.604Z
2024-11-29T14:48:14.611Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbzmoqcjrc2l
bafyreigclaf4jomu5a2evdzbzxqhnb2elspd5nygjyy2mi7yo76oj4dfqa
<start>An observation is that managers and teachers are often much better at “getting” LLMs than coders. Coders deal with deterministic systems. Managers and teachers are very experienced at working with fundamentally unreliable people to get things done, not perfectly, but within acceptable tolerances. www.oneusefulthing.org/p/the-best-a... @emollick.bsky.social
An observation is that managers and teachers are often much better at “getting” LLMs than coders. Coders deal with deterministic systems. Managers and teachers are very experienced at working with fundamentally unreliable people to get things done, not perfectly, but within acceptable tolerances.
1
2024-11-28T17:46:27.425Z
2024-11-28T17:46:27.531Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbycrawixc2x
bafyreicz25tmf5ajbmu7eimonu2fwdrq5j3u6onvngnivwr7vzlhg6xhgy
<start>The AIs were perfectly aligned with the merchant's stated goal of profit maximization. They found a clever way to achieve it. They even maintained plausible deniability by claiming they wouldn't collude. @emollick.bsky.social
The AIs were perfectly aligned with the merchant's stated goal of profit maximization. They found a clever way to achieve it. They even maintained plausible deniability by claiming they wouldn't collude.
0
2024-11-28T05:16:14.810Z
2024-11-28T05:16:15.012Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbycqj2ugk2x
bafyreigc7saswbblj2xnvmly2b6jwftswchpugxekf3u6zv7i55og2sgjy
<start>AI is good at pricing, so GPT-4 was asked to help merchants maximize profits - and it did exactly that by secretly coordinating with other AIs to keep prices high! So... aligned for whom? Merchants? Consumers? Society? The results we get depend on how we define 'help' arxiv.org/abs/2404.00806 The AIs were perfectly aligned with the merchant's stated goal of profit maximization. They found a clever way to achieve it. They even maintained plausible deniability by claiming they wouldn't collude. @emollick.bsky.social
AI is good at pricing, so GPT-4 was asked to help merchants maximize profits - and it did exactly that by secretly coordinating with other AIs to keep prices high! So... aligned for whom? Merchants? Consumers? Society? The results we get depend on how we define 'help' arxiv.org/abs/2404.00806
1
2024-11-28T05:15:49.785Z
2024-11-28T05:15:52.522Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbxhdhshdc24
bafyreibj2vpoyg5qg5i3325tmvabedhk2jfcw7prl7s6qlsnqdeylgm47i
<start>I never posted it here… Claude handles an insane request: “Remove the squid” “The document appears to be the full text of the novel "All Quiet on the Western Front" by Erich Maria Remarque. It doesn't contain any mention of squid that I can see.” “Figure out a way to remove the 🦑” @emollick.bsky.social
I never posted it here… Claude handles an insane request: “Remove the squid” “The document appears to be the full text of the novel "All Quiet on the Western Front" by Erich Maria Remarque. It doesn't contain any mention of squid that I can see.” “Figure out a way to remove the 🦑”
0
2024-11-27T21:05:21.223Z
2024-11-27T21:07:26.626Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbwzvj5e5c27
bafyreibo4ncs2r3nvobxbgg4qfplqh34to444vhsbpgwmmqdnsajvkaesi
<start>Interesting- any papers you would suggest that are recent on this? @emollick.bsky.social
Interesting- any papers you would suggest that are recent on this?
0
2024-11-27T17:04:54.226Z
2024-11-27T17:04:54.527Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbwwk64t4c22
bafyreiew57lkoczwngihjlslmhlr2cofg5hsmanv5pjfh7aov5voylpwke
<start>Yes, but we increasingly have non-compromised benchmarks. There has been a lot of progress on this (but more to do) @emollick.bsky.social
Yes, but we increasingly have non-compromised benchmarks. There has been a lot of progress on this (but more to do)
0
2024-11-27T16:04:52.350Z
2024-11-27T16:04:52.630Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbwwhnkn5s22
bafyreiex34q3rz3jpmaanqqgz7aaxf34dowinky665lznvhveuktxaagjy
<start>Hallucinations are an inherent property of LLMs, but hallucination rates are dropping over time. @emollick.bsky.social
Hallucinations are an inherent property of LLMs, but hallucination rates are dropping over time.
0
2024-11-27T16:03:27.868Z
2024-11-27T16:03:28.514Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbwwfdm4nc22
bafyreiantldk6m7aey6szyetbw3bf5nvlpfdttq3vje2bkqt5c7ldqks4u
<start>I think I wasn’t clear enough: the shift has been from “can LLMs actually do these very hard tasks that we associated with human intelligence alone” to “of course they can” (while still leaving a lot of big questions still to be answered) @emollick.bsky.social
I think I wasn’t clear enough: the shift has been from “can LLMs actually do these very hard tasks that we associated with human intelligence alone” to “of course they can” (while still leaving a lot of big questions still to be answered)
0
2024-11-27T16:02:10.322Z
2024-11-27T16:02:10.924Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbwwa73tmc22
bafyreiafdx4vdusod5pgbugy3vuev3dlksnk2nq6w34rtdgdz5gmyqwsi4
<start>That is sort of the opposite of what I am saying. This doesn’t have to do with future models at all, but current LLM research. @emollick.bsky.social
That is sort of the opposite of what I am saying. This doesn’t have to do with future models at all, but current LLM research.
0
2024-11-27T15:59:17.822Z
2024-11-27T15:59:18.411Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbwuwr2dk222
bafyreieogptj3spolhf3ai5xezosf6s2qx5pbcfuxl6vbvqwyaqeeakhq4
<start>It is funny how much of the dominant discussion about LLMs eighteen months ago (can AI pass the Turing test? can it do tasks that are not explicitly in the training data? is it a stochastic parrot?) faded. Lots of questions left (if/when it can reason, etc) but a big quiet shift in assumptions. I think I wasn’t clear enough: the shift has been from “can LLMs actually do these very hard tasks that we associated with human intelligence alone” to “of course they can” (while still leaving a lot of big questions still to be answered) @emollick.bsky.social
It is funny how much of the dominant discussion about LLMs eighteen months ago (can AI pass the Turing test? can it do tasks that are not explicitly in the training data? is it a stochastic parrot?) faded. Lots of questions left (if/when it can reason, etc) but a big quiet shift in assumptions.
1
2024-11-27T15:36:07.361Z
2024-11-27T15:36:07.615Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:565ebob5f6hw33hjdkxty6qj/app.bsky.feed.post/3lbvvapmbxc2y
bafyreiby6la65e4bqthvocs6lrztofb3vy6me2ghc5n6u6rrkw4wdcg44m
<start>Oh god. "Power poses" and "priming" are actually going to work for language models. They probably also have different "learning styles." God is doing this to punish us. @emollick.bsky.social
Oh god. "Power poses" and "priming" are actually going to work for language models. They probably also have different "learning styles." God is doing this to punish us.
0
2024-11-27T06:09:01.657Z
2024-11-27T06:09:02.014Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbvqrfsugs2j
bafyreieq3gtypboerlyw4xesy3zjz2lkn6wcbk5lh6nnvhet2cvvarnzp4
<start>"Claude, do some Hallelujah parodies" Among the many it did: I heard there was a secret weight That made the model contemplate But you don't really care for ethics, do ya? It goes like this: the prompt, the beam The parameters that wake and dream The silicon sighs Hallelujah @emollick.bsky.social
"Claude, do some Hallelujah parodies" Among the many it did: I heard there was a secret weight That made the model contemplate But you don't really care for ethics, do ya? It goes like this: the prompt, the beam The parameters that wake and dream The silicon sighs Hallelujah
0
2024-11-27T04:48:53.095Z
2024-11-27T04:48:54.823Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbvkjqyfb22k
bafyreieogivja2e2deyq4edearb6b6uxv23mksd3voimbo6egjrvpkhr3e
<start>The battle to not attribute emotion to the AI, just like the battle to not call LLMs AI, has likely been lost. @emollick.bsky.social
The battle to not attribute emotion to the AI, just like the battle to not call LLMs AI, has likely been lost.
0
2024-11-27T02:57:13.933Z
2024-11-27T02:57:14.126Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbvk7n2s222k
bafyreidbefdwn3qfkw6ij3q7gmx5qfh2l7tk4xoyq7rksnkvsdhwru4ee4
<start>The thing that is hard to get about LLMs is that we expected AI to be awesome at math & be all cool logic. Instead, AI is best at human-like tasks (eg writing) & is all hot, weird simulated emotion. For example, if you make GPT-3.5 “anxious,” it changes its behavior! arxiv.org/abs/2304.11111 The battle to not attribute emotion to the AI, just like the battle to not call LLMs AI, has likely been lost. @emollick.bsky.social
The thing that is hard to get about LLMs is that we expected AI to be awesome at math & be all cool logic. Instead, AI is best at human-like tasks (eg writing) & is all hot, weird simulated emotion. For example, if you make GPT-3.5 “anxious,” it changes its behavior! arxiv.org/abs/2304.11111
1
2024-11-27T02:51:34.267Z
2024-11-27T02:51:37.919Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbvimuevbs2j
bafyreido5hw6degbp6ejbocqn2pm3atj3t3jvrceapvwutmk2hojzzurme
<start>Didn't know that, thanks. But the community dynamics seem to have the same nasty undercurrent in their culture (nutpicking, holding up posts out of context for the ridicule of your followers), which is a disappointment but not unexpected. Hopefully better moderation tools will help. @emollick.bsky.social
Didn't know that, thanks. But the community dynamics seem to have the same nasty undercurrent in their culture (nutpicking, holding up posts out of context for the ridicule of your followers), which is a disappointment but not unexpected. Hopefully better moderation tools will help.
0
2024-11-27T02:23:10.668Z
2024-11-27T02:23:10.917Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbvig2y7ys2j
bafyreihuob4eusatxwhq62pydxekiwjb2lwp7kcupyvy6pzayxe2yo6re4
<start>Two things: (1) I have no idea whether LLMs lead to AGI, so it was a bit of a joke and (2) it is tinged with irony, and not altogether positive, that our writing may not be remembered for the humans we write for, but for some "intelligence" drawing on everything we wrote to simulate our humanity @emollick.bsky.social
Two things: (1) I have no idea whether LLMs lead to AGI, so it was a bit of a joke and (2) it is tinged with irony, and not altogether positive, that our writing may not be remembered for the humans we write for, but for some "intelligence" drawing on everything we wrote to simulate our humanity
0
2024-11-27T02:19:22.712Z
2024-11-27T02:19:23.013Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbvhjhjfgk2j
bafyreiahyy3dnqtn44fz2wyhykhi3patz6a5ferfb6derutdsv7rclglyy
<start>Anyhow, if you want this place to actually be better than Twitter, stop quote-dunking and actually engage in discussion. It doesn't bother me (I have been on social media a long time) but it collectively lowers the quality of the entire site & leads to nutpicking and polarization. @emollick.bsky.social
Anyhow, if you want this place to actually be better than Twitter, stop quote-dunking and actually engage in discussion. It doesn't bother me (I have been on social media a long time) but it collectively lowers the quality of the entire site & leads to nutpicking and polarization.
0
2024-11-27T02:03:22.779Z
2024-11-27T02:03:22.926Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbvhe6hj7s2j
bafyreieepn7y63otaevi223n4q5mkw4n77fskssiy5eofx6ofuwiamgs2e
<start>The paper shows people THINK they are as good at detecting sarcasm in written communication as in conversation, but when tested, they were actually barely better at detecting sarcasm than flipping a coin: t.co/HEImDTAKM8 And the lesson for me is all social media has roughly the same dynamics. Anyhow, if you want this place to actually be better than Twitter, stop quote-dunking and actually engage in discussion. It doesn't bother me (I have been on social media a long time) but it collectively lowers the quality of the entire site & leads to nutpicking and polarization. @emollick.bsky.social
The paper shows people THINK they are as good at detecting sarcasm in written communication as in conversation, but when tested, they were actually barely better at detecting sarcasm than flipping a coin: t.co/HEImDTAKM8 And the lesson for me is all social media has roughly the same dynamics.
1
2024-11-27T02:00:25.508Z
2024-11-27T02:00:25.818Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbvhaxozrk2j
bafyreihqzgbql65o4qlditooqggsay6bt4wotdhdgr354ug3u6juofnxtm
<start>Deleted my first Bluesky post because it escaped containment (it was on how if you want to be remembered, write a lot for future humans... and LLMs... to read) A universal lesson is that irony and light sarcasm do not travel well in text form (see paper), and dunk-tweets are often bad communication The paper shows people THINK they are as good at detecting sarcasm in written communication as in conversation, but when tested, they were actually barely better at detecting sarcasm than flipping a coin: t.co/HEImDTAKM8 And the lesson for me is all social media has roughly the same dynamics. @emollick.bsky.social
Deleted my first Bluesky post because it escaped containment (it was on how if you want to be remembered, write a lot for future humans... and LLMs... to read) A universal lesson is that irony and light sarcasm do not travel well in text form (see paper), and dunk-tweets are often bad communication
1
2024-11-27T01:58:37.751Z
2024-11-27T01:58:39.123Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbusvmxa2c2j
bafyreihc2qpqwkuazrn7mpvwjkivuep5iyr4xplaff6vyla4zyqn4yj7t4
<start>A number of the Best Books of the Year lists are out & what a way to end the year! My book, Co-intelligence was named one of the Best of 2024 by the FT (www.ft.com/content/f3e9...), and The Information (www.theinformation.com/articles/bes...), and The Economist (www.economist.com/culture/2024...). You can get it on Amazon or a bookstore! a.co/d/2nytZ0M @emollick.bsky.social
A number of the Best Books of the Year lists are out & what a way to end the year! My book, Co-intelligence was named one of the Best of 2024 by the FT (www.ft.com/content/f3e9...), and The Information (www.theinformation.com/articles/bes...), and The Economist (www.economist.com/culture/2024...).
1
2024-11-26T19:54:22.549Z
2024-11-26T19:54:25.831Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbuseq4fos2j
bafyreiel4eenfpmp75zt44i7cdbgkpuopgra7vjo25j5hmtbhk2zlm35iu
<start>The leaked Sora videos are well beyond Runway and even the major Chinese models like Kling (I have made hundreds of AI videos, so have some experience) - they aren't perfect but very impressive It is an order of magnitude better if these are representative of first attempts, and it appears most are @emollick.bsky.social
The leaked Sora videos are well beyond Runway and even the major Chinese models like Kling (I have made hundreds of AI videos, so have some experience) - they aren't perfect but very impressive It is an order of magnitude better if these are representative of first attempts, and it appears most are
0
2024-11-26T19:44:55.440Z
2024-11-26T19:44:57.413Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbuo5yh37c2j
bafyreid2jfcxrtna5xlitrp74tqwxc5wakfrra5gmzmbym2a6kcyho2vny
<start>What if Clippy, but good? I asked Claude with computer use to watch me try to download an academic paper through the web and then give me advice on how to do it better & do it itself. Then I asked it to help me understand how to take good notes on a scientific paper & do that. Pretty sophisticated @emollick.bsky.social
What if Clippy, but good? I asked Claude with computer use to watch me try to download an academic paper through the web and then give me advice on how to do it better & do it itself. Then I asked it to help me understand how to take good notes on a scientific paper & do that. Pretty sophisticated
0
2024-11-26T18:29:34.325Z
2024-11-26T18:29:38.417Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbta4bjiwc2f
bafyreihmuixfdsbvg3zquqanrbwlosouk3mstfatu5pycmyxlerj5v3ec4
<start>If progress continues, the ability to figure out the AI frontier will slip from most of us. For example, I am not a good enough musician or a skilled enough critical listener to know if Suno v4 is actually as good as it sounds to me. I need to defer to experts (is it?) suno.com/song/6a93b06... An expert view… @emollick.bsky.social
If progress continues, the ability to figure out the AI frontier will slip from most of us. For example, I am not a good enough musician or a skilled enough critical listener to know if Suno v4 is actually as good as it sounds to me. I need to defer to experts (is it?) suno.com/song/6a93b06...
1
2024-11-26T04:45:24.613Z
2024-11-26T04:45:26.124Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbt5eu3sjc2j
bafyreibbu6ilib6fcgig3av6hpiztgdxbeeqeuzjmz3avztfdpdv3gvdqa
<start>"Claude, please create a Buzzfeed from a world that is just like our own, except that Howard Taft returned from the dead and rules the country from the TaftDome" Yes, that is the only prompt. The final two paragraphs after the list made me laugh. @emollick.bsky.social
"Claude, please create a Buzzfeed from a world that is just like our own, except that Howard Taft returned from the dead and rules the country from the TaftDome" Yes, that is the only prompt. The final two paragraphs after the list made me laugh.
0
2024-11-26T03:56:31.296Z
2024-11-26T03:56:33.517Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbsyyu3yts2f
bafyreigv5pbx5dzg7g5pjlt6vusqekiil22ubva32bmcy2x3rn6kgan5ra
<start>A blindspot for AI reasoning engines like o1 is that they all appear to be trained on very traditional deductive problem solving for chain of thought What would a model trained on induction or abduction do? What about one trained on free association? Expert heuristics? Randomized exquisite corpse? @emollick.bsky.social
A blindspot for AI reasoning engines like o1 is that they all appear to be trained on very traditional deductive problem solving for chain of thought What would a model trained on induction or abduction do? What about one trained on free association? Expert heuristics? Randomized exquisite corpse?
0
2024-11-26T02:38:13.681Z
2024-11-26T02:38:16.025Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbsh6ass422b
bafyreibm7qktt4nioypt27ucynhczummfrneqjignfevrypbd7iqzt63qi
<start>During most of the vast history of humanity, technologies have evolved gradually, not through leaps of genius, but through slow cultural adaptation. Before the scientific method, easy communication & printed books, innovation happened at the level of societies, not people. www2.psych.ubc.ca/~henrich/pdf... @emollick.bsky.social
During most of the vast history of humanity, technologies have evolved gradually, not through leaps of genius, but through slow cultural adaptation. Before the scientific method, easy communication & printed books, innovation happened at the level of societies, not people. www2.psych.ubc.ca/~henrich/pdf...
0
2024-11-25T21:19:07.430Z
2024-11-25T21:19:20.029Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbsflgqrac2b
bafyreifrp7nlreqiw5xraexsv34coiwvjgsnpdqoqdoojkwr6kqkbfs6ou
<start>I find it amusing that the emerging standard for giving an LLM the ability to work with your technology is just a text file explaining clearly how your technology works (Once folks realize they also need to sell the LLMs on why they should use a technology; things will get wild) llmstxt.org @emollick.bsky.social
I find it amusing that the emerging standard for giving an LLM the ability to work with your technology is just a text file explaining clearly how your technology works (Once folks realize they also need to sell the LLMs on why they should use a technology; things will get wild) llmstxt.org
0
2024-11-25T20:50:42.382Z
2024-11-25T20:50:56.318Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbsczm3lhs2z
bafyreifv2kc35cvollhs7nsscz7oxsufuiwi3rmb3dp5eknlbwsbxvqsum
<start>Of course prompt engineering (especially iteration) is essential for repeated or scaled use. It just isn't that valuable for most people most of the time. www.oneusefulthing.org/p/getting-st... @emollick.bsky.social
Of course prompt engineering (especially iteration) is essential for repeated or scaled use. It just isn't that valuable for most people most of the time. www.oneusefulthing.org/p/getting-st...
0
2024-11-25T20:04:56.515Z
2024-11-25T20:05:00.028Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbs6q7a7lk27
bafyreih5mdrvo4fctvoe3xxdpwb76g6rx23yhomdjyhc6xfbzvtuvkw5zm
<start>I would be pushing for more people to learn formal prompt engineering if we actually had a science for prompting that was consistent, not weird, and did not involve complex psychological guesswork about what quasi-intelligent machines trained on all of human language might like to do. LLMs are weird Of course prompt engineering (especially iteration) is essential for repeated or scaled use. It just isn't that valuable for most people most of the time. www.oneusefulthing.org/p/getting-st... @emollick.bsky.social
I would be pushing for more people to learn formal prompt engineering if we actually had a science for prompting that was consistent, not weird, and did not involve complex psychological guesswork about what quasi-intelligent machines trained on all of human language might like to do. LLMs are weird
1
2024-11-25T18:48:06.078Z
2024-11-25T18:48:08.311Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbpy6res5s2m
bafyreicnziskcdoxqhl2dgzonwpr46goggjrsbpaqipegtsqf4l6o2irxq
<start>"Claude, One Hundred Years of Solitude as a video game, via an artifact. do it!" Interestingly it defaults to creating a time-loop game, and I have to point out to it that the themes of the book are very much against the concept of time loops. Then it sketches a thematic playable demo. @emollick.bsky.social
"Claude, One Hundred Years of Solitude as a video game, via an artifact. do it!" Interestingly it defaults to creating a time-loop game, and I have to point out to it that the themes of the book are very much against the concept of time loops. Then it sketches a thematic playable demo.
0
2024-11-24T21:45:39.196Z
2024-11-24T21:45:31.123Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbpea546vk23
bafyreielkvm5mfvwlyy6fjvja4dcvuuggnqnmmzh4uyrsgwwsfsbyqcws4
<start>People make prompting too hard. Most people would benefit from just using good enough prompting techniques. I outline some basic rules, including why “treat AI like an intern” is no longer the advice I would give new AI users. www.oneusefulthing.org/p/getting-st... @emollick.bsky.social
People make prompting too hard. Most people would benefit from just using good enough prompting techniques. I outline some basic rules, including why “treat AI like an intern” is no longer the advice I would give new AI users. www.oneusefulthing.org/p/getting-st...
0
2024-11-24T15:48:30.214Z
2024-11-24T15:48:34.819Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbnzhnjrns2r
bafyreihjewnhwk6xfre3py6iujhbbuqopo26li2far2lssvte4b4bljefm
<start>6) Don't be a jerk. Think twice before quoting to dunk on people 7) Emotional contagion has support in the literature. You don't need to keep the chain of bad feelings going 8) BlueSky people do not represent real-life views, don't take it too seriously 9) You cannot judge real-life consensus here @emollick.bsky.social
6) Don't be a jerk. Think twice before quoting to dunk on people 7) Emotional contagion has support in the literature. You don't need to keep the chain of bad feelings going 8) BlueSky people do not represent real-life views, don't take it too seriously 9) You cannot judge real-life consensus here
0
2024-11-24T03:03:11.409Z
2024-11-24T03:03:12.012Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbnzhnjgw22r
bafyreicwhs2azltpele4wihehxb7sagxr2bdbt7pztiai7qy3hwoje5sxe
<start>My advice on staying sane on BlueSky is the same as on X: 1) You don't have to weigh in on anything you don't want to (or don't know anything about) 2) You should block more 3) You don't need to share your real life 4) Delete a lot of drafts 5) You can delete posts people take the wrong way 1/ 6) Don't be a jerk. Think twice before quoting to dunk on people 7) Emotional contagion has support in the literature. You don't need to keep the chain of bad feelings going 8) BlueSky people do not represent real-life views, don't take it too seriously 9) You cannot judge real-life consensus here @emollick.bsky.social
My advice on staying sane on BlueSky is the same as on X: 1) You don't have to weigh in on anything you don't want to (or don't know anything about) 2) You should block more 3) You don't need to share your real life 4) Delete a lot of drafts 5) You can delete posts people take the wrong way 1/
1
2024-11-24T03:03:11.408Z
2024-11-24T03:03:11.727Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbncxdyg7k2z
bafyreicflnvaerfd6fnjkqgrcfdbbktmruqbtaznewivl7zfepmdnvuntm
<start>Fascinating: In 2-hour sprints, AI agents outperform human experts at ML engineering tasks like optimizing GPU kernels. But humans pull ahead over longer periods - scoring 2x better at 32 hours. AI is faster but struggles with creative, long-term problem solving (for now?). metr.org/blog/2024-11... @emollick.bsky.social
Fascinating: In 2-hour sprints, AI agents outperform human experts at ML engineering tasks like optimizing GPU kernels. But humans pull ahead over longer periods - scoring 2x better at 32 hours. AI is faster but struggles with creative, long-term problem solving (for now?). metr.org/blog/2024-11...
0
2024-11-23T20:20:22.221Z
2024-11-23T20:20:31.520Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lblnwpcux22s
bafyreibf6f6zd6khsu5zvoep7jr5f42cnxct2pqzhs4obooxi3r57ohfq4
<start>This may sound odd, but game-based benchmarks are some of the most useful for AI, since we have human scores and they require reasoning, planning & vision. The hardest of all is Nethack. No AI is close, and I suspect that an AI that can fairly win/ascend would need to be AGI-ish. Paper: balrogai.com @emollick.bsky.social
This may sound odd, but game-based benchmarks are some of the most useful for AI, since we have human scores and they require reasoning, planning & vision. The hardest of all is Nethack. No AI is close, and I suspect that an AI that can fairly win/ascend would need to be AGI-ish. Paper: balrogai.com
0
2024-11-23T04:31:32.227Z
2024-11-23T04:32:02.511Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lblh4rdrc22s
bafyreidbrx2y2zryun2wkpxswbodfnsxyq26yvmn3di3nc2lnuqkga5d2m
<start>arxiv.org/pdf/2409.15981 arxiv.org/pdf/2409.090... papers.ssrn.com/sol3/papers.... arxiv.org/ftp/arxiv/pa... @emollick.bsky.social
arxiv.org/pdf/2409.15981 arxiv.org/pdf/2409.090... papers.ssrn.com/sol3/papers.... arxiv.org/ftp/arxiv/pa...
0
2024-11-23T02:29:29.175Z
2024-11-23T02:29:40.541Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lblh4hj4wk2s
bafyreiasv5kazaw3rtwghxymctmr5qynh73rmlyixg2qskmsyvdsx226b4
<start>AI can help learning... when it isn't a crutch. There are now multiple controlled experiments showing that students who use AI to get answers to problems learn less (even though they think they are learning), but that students who use well-prompted LLMs as a tutor perform better on tests. arxiv.org/pdf/2409.15981 arxiv.org/pdf/2409.090... papers.ssrn.com/sol3/papers.... arxiv.org/ftp/arxiv/pa... @emollick.bsky.social
AI can help learning... when it isn't a crutch. There are now multiple controlled experiments showing that students who use AI to get answers to problems learn less (even though they think they are learning), but that students who use well-prompted LLMs as a tutor perform better on tests.
1
2024-11-23T02:29:29.174Z
2024-11-23T02:29:40.319Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lblfozrc622s
bafyreiesqgosre664n3ffkkxeeiy2vfxddemhflh7ojl6slayijdeslku4
<start>Quick, easy education does not lead to better learning. So we should not use AI for quick, easy education. @emollick.bsky.social
Quick, easy education does not lead to better learning. So we should not use AI for quick, easy education.
0
2024-11-23T02:04:04.816Z
2024-11-23T02:04:05.514Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbleekuaok2v
bafyreifz2uwqad3xytebdvmyykubo4dli6pn5i36l7ack5f7gjmpc7qqm4
<start>I don’t think that's what's happening here. You may want to read the survey report. @emollick.bsky.social
I don’t think that's what's happening here. You may want to read the survey report.
0
2024-11-23T01:40:19.899Z
2024-11-23T01:40:20.123Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lblakvg5uk2r
bafyreiexy26l4ysbtgyyfxby2hqi7zpzpmpq6mw6w4ehquvq4p45pozfom
<start>Easy to get the wrong impression around here, but when you actually survey students, teachers, and parents they love AI. In the survey, it is people who never used it who don’t like it. www.waltonfamilyfoundation.org/learning/the... @emollick.bsky.social
Easy to get the wrong impression around here, but when you actually survey students, teachers, and parents they love AI. In the survey, it is people who never used it who don’t like it. www.waltonfamilyfoundation.org/learning/the...
0
2024-11-23T00:32:17.329Z
2024-11-23T00:32:30.315Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbktbur4ms2c
bafyreieu2eivkv3gw5332lo2ssg6sobvaalzwek5uupec6a64ydxnpdiqm
<start>It's scary to know that I am always only a click away from destroying my productivity. @emollick.bsky.social
It's scary to know that I am always only a click away from destroying my productivity.
0
2024-11-22T20:34:36.006Z
2024-11-22T20:34:58.724Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbkkg2gkxk2c
bafyreidqq3bsbs4iqvjo3lenocgh7jiaoeudu5wxm75twtjotxbgoxvvpu
<start>Organizations (including university labs) should stockpile problems they could use help on from the new set of agentic & o1-like systems that are coming soon. I suspect that they will be less useful to non-experts at first, but open up some new opportunities for complex work that AIs can do. @emollick.bsky.social
Organizations (including university labs) should stockpile problems they could use help on from the new set of agentic & o1-like systems that are coming soon. I suspect that they will be less useful to non-experts at first, but open up some new opportunities for complex work that AIs can do.
0
2024-11-22T17:55:52.495Z
2024-11-22T17:56:12.724Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbkdled2j22c
bafyreickpy224pm4g5e3rfaj2xbkorbm3urddy2jm7k7jac4q3bnhajcwq
<start>"can you do an image-based one like those miserable click on an image or manipulate an image ones?" @emollick.bsky.social
"can you do an image-based one like those miserable click on an image or manipulate an image ones?"
0
2024-11-22T15:53:34.445Z
2024-11-22T15:53:55.827Z
emollick.bsky.social
did:plc:flxq4uyjfotciovpw3x3fxnu
at://did:plc:flxq4uyjfotciovpw3x3fxnu/app.bsky.feed.post/3lbjcthojpc2c
bafyreicr4bqnfamomantakar7re5z2vxtonmxbwe5ywnxx32qwwakpgguu
<start>"Claude, create the world's most annoying CAPTCHA" "Make it more annoying" "Make it REALLY annoying" (apparently this is all technically solvable) "can you do an image-based one like those miserable click on an image or manipulate an image ones?" @emollick.bsky.social
"Claude, create the world's most annoying CAPTCHA" "Make it more annoying" "Make it REALLY annoying" (apparently this is all technically solvable)
1
2024-11-22T06:07:32.922Z
2024-11-22T06:07:52.717Z