<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>machinelearning on Joakim Verona</title>
    <link>https://www.verona.se/tags/machinelearning/</link>
    <description>Recent content in machinelearning on Joakim Verona</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <copyright>(c) 2016 Copyright Joakim Verona</copyright>
    <lastBuildDate>Mon, 09 Jan 2023 20:59:00 +0100</lastBuildDate><atom:link href="https://www.verona.se/tags/machinelearning/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>petal.ml</title>
      <link>https://www.verona.se/post/petal-ml-experiment/</link>
      <pubDate>Mon, 09 Jan 2023 20:59:00 +0100</pubDate>
      
      <guid>https://www.verona.se/post/petal-ml-experiment/</guid>
      <description>
        I had a funny dialogue with the Bloom model using petals.ml, which is interesting because it's a collaborative framework where you share GPU acceleration, partition large models into smaller parts, and help each other run the model. Of course ChatGPT is all the rage now, but I find this dialogue funnier than the ones I have with ChatGPT. Also, ChatGPT is a closed proprietary model, so it's less interesting for many use cases.
      </description>
    </item>
  </channel>
</rss>
