Reka Releases Reka Flash, a Highly Capable Multimodal Model

In the ever-evolving landscape of AI, Reka is setting a new standard with the unveiling of Reka Flash, an exceptional multimodal and multilingual model designed for efficiency and speed. Emerging as a “turbo-class” contender, the 21-billion parameter powerhouse, Reka Flash, has been meticulously trained from the ground up to push the boundaries of AI capabilities. It stands out in the marketplace with its ability to rival the performance of much larger contemporaries, striking a formidable balance between agility and quality. This makes it an ideal solution for demanding applications that necessitate rapid processing without compromising on output excellence.

As Reka solidifies its position in the high-performance AI arena, Reka Edge offers a compact alternative. With a 7-billion parameter construct, it’s tailored for environments where efficiency is paramount. Whether deployed on devices or utilized locally, Reka Edge promises to deliver robust AI capabilities without the heft of its larger counterparts.

Available for exploration in the Reka Playground through a public beta, Reka Flash and Reka Edge are poised to redefine what’s possible in the intersection of language comprehension and visual perception. And for those looking to push the envelope even further, Reka teases the arrival of its most ambitious project yet, Reka Core, set to launch in the coming weeks.

Overview of Reka’s new AI model

According to their benchmarks, the models compared include Reka Flash, Gemini Pro, GPT-3.5, Grok-1, Mixtral 45B, Llama-2, GPT-4, and Gemini Ultra, evaluated on MMLU, GSM8K, HumanEval, and GPQA.

Here are some key takeaways from the benchmarks:

  • Reka Flash performs well on all four benchmarks, but it is not the best model on any of them.
  • Reka Flash is a relatively small model (21B parameters), but it is able to achieve competitive performance with much larger models.
  • The best model on a particular benchmark depends on the specific task that the model is being used for.

Overall, their results show that the model is quite powerful for its size.

Reka Multimodal Capabilities

Reka Flash performs well across the board on the listed benchmarks. It’s also worth noting that this table only shows a small sample of benchmarks. There are many other factors to consider when evaluating a language model, such as its training data, its architecture, and its computational efficiency.

Testing The Model

First, let's start by giving it a simple coding question.

OK, not bad. Now let's ask it a harder question.

This question was pulled from LeetCode 2751, Robot Collisions. Notice how I didn't mention LeetCode or the question title in the prompt? I did this to reduce the chance that the model would simply recognize the question from its training data. I also picked a relatively new question, making it even less likely to have appeared in that data. Nonetheless, here is the result we got. It seemed to have gotten the correct parameters; it just has a different function name and no return type hints, which makes sense, considering we asked only the raw question.

I will post the rest of the answer here in case you want to copy it:

def survivingRobots(positions, healths, directions):
    i = 0
    while i < len(positions) - 1:
        j = i + 1
        while j < len(positions):
            if directions[i] == 'L' and positions[i] == positions[j]:
                if healths[i] < healths[j]:
                    healths[i] -= 1
                elif healths[i] > healths[j]:
                    healths[j] = 0
                else:
                    healths[i] = 0
                    healths[j] = 0
                j += 1
            elif directions[i] == 'R' and positions[i] == positions[j]:
                if healths[i] < healths[j]:
                    healths[j] -= 1
                elif healths[i] > healths[j]:
                    healths[i] = 0
                else:
                    healths[i] = 0
                    healths[j] = 0
                j += 1
            else:
                i += 1
                j += 1
    return [h for h in healths if h > 0]

Unfortunately, when it was run it passed only 128 / 2433 test cases. That said, it's a hard question, and most language models probably wouldn't get it right on the first attempt.
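For reference, the usual approach to this problem is a stack-based simulation: sort the robots by position, push right-moving robots onto a stack, and resolve each left-moving robot against the stack top until one side survives. A minimal sketch (the function name `surviving_robots` is my own; this is not the model's output) might look like:

```python
def surviving_robots(positions, healths, directions):
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])  # process left to right
    hp = healths[:]          # mutable copy; 0 marks a destroyed robot
    stack = []               # indices of still-alive right-moving robots

    for i in order:
        if directions[i] == 'R':
            stack.append(i)
        else:
            # A left-moving robot collides with every pending right-mover.
            while stack and hp[i] > 0:
                top = stack[-1]
                if hp[top] < hp[i]:
                    stack.pop()
                    hp[top] = 0
                    hp[i] -= 1
                elif hp[top] > hp[i]:
                    hp[top] -= 1
                    hp[i] = 0
                else:            # equal health: both are destroyed
                    stack.pop()
                    hp[top] = 0
                    hp[i] = 0

    # Survivors keep their original input order.
    return [h for h in hp if h > 0]
```

Each robot is pushed and popped at most once, so this runs in O(n log n) overall, dominated by the sort.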

Testing Reka Flash v1.0

The result:

<table>
 <thead>
  <tr>
   <th>IdNum</th>
   <th>LName</th>
   <th>FName</th>
   <th>JobCode</th>
   <th>Salary</th>
  </tr>
 </thead>
 <tbody>
  <tr>
   <td>1876</td>
   <td>CHIN</td>
   <td>JACK</td>
   <td>TA1</td>
   <td>$44,096</td>
  </tr>
  <tr>
   <td>1114</td>
   <td>GREENWALD</td>
   <td>JANICE</td>
   <td>ME3</td>
   <td>$38,950</td>
  </tr>
  <tr>
   <td>1556</td>
   <td>PENNINGTON</td>
   <td>MICHAEL</td>
   <td>ME1</td>
   <td>$31,054</td>
  </tr>
  <tr>
   <td>1354</td>
   <td>PARKER</td>
   <td>MARY</td>
   <td>FA3</td>
   <td>$67,445</td>
  </tr>
  <tr>
   <td>1130</td>
   <td>WOOD</td>
   <td>DEBORAH</td>
   <td>PT2</td>
   <td>$37,427</td>
  </tr>
 </tbody>
</table>

This was very impressive; Reka Flash seems to have very good OCR under the hood. Go ahead and render the HTML yourself and compare it to the original table.
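If you want to check the model's transcription programmatically rather than by eye, a quick sketch using Python's built-in html.parser could pull the cell values back out of the HTML for comparison (the trimmed table below stands in for the model's full output):

```python
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collects the text of every <td>/<th> cell into a list of rows."""
    def __init__(self):
        super().__init__()
        self.rows = []
        self._row = []
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and data.strip():
            self._row.append(data.strip())

# Trimmed copy of the model's table output, for illustration.
html = """<table><thead><tr><th>IdNum</th><th>LName</th></tr></thead>
<tbody><tr><td>1876</td><td>CHIN</td></tr></tbody></table>"""

parser = TableExtractor()
parser.feed(html)
print(parser.rows)  # [['IdNum', 'LName'], ['1876', 'CHIN']]
```

From there you could diff the extracted rows against a hand-typed copy of the source table to count any OCR mistakes.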

Closing Thoughts

The arrival of Reka Flash is indeed a noteworthy leap in the realm of artificial intelligence, presenting itself as a fairly impressive model with considerable potential. As a testament to its capabilities, my initial interaction with the model suggests there’s much to be explored and harnessed within its sophisticated architecture. However, to fully grasp the extent of its prowess, further experimentation and exploration are essential.

While Reka Flash positions itself as a high-caliber model, it’s important to note that this isn’t the pinnacle of Reka’s innovation. The impending release of Reka Core looms on the horizon, teasing the promise of an even more powerful tool in the AI toolkit. Given what we’ve seen from Reka Flash and Reka Edge, expectations are high for what Reka Core will bring to the table.

The anticipation of Reka Core brings about contemplation of Reka’s trajectory among the constellation of companies in the LLM (large language model) space. It’s an arena filled with heavyweights and emerging challengers, each vying to push the boundaries of what’s possible. In such a competitive market, Reka’s strategy and offerings will be crucial factors.

An unfortunate caveat to the excitement around Reka’s models is the lack of availability of their weights. The AI community thrives on shared knowledge and the ability to build upon others’ work; the inaccessible weights mean that some practitioners and researchers will miss out on the chance to delve deeper into the inner workings and potential applications of these models.

As we look towards what’s next, it’s clear that Reka is carving out its own path in the AI landscape. With the balance between efficiency and power in Reka Flash and Reka Edge, coupled with the anticipated launch of Reka Core, there’s a palpable buzz around where this AI company is headed. One thing is certain: the AI community is watching, waiting, and eager to see how Reka’s contributions will shape the future of technology.
