LoLLMs

News

26 February 2024

Unlocking Advanced Language Model Capabilities

The Claude-3 model from Anthropic is known for its strong performance across a wide range of natural language tasks. With the new Anthropic binding, lollms users can explore and utilize these advanced capabilities in their own applications.

To get started with the Anthropic integration, users will need to obtain an API key from Anthropic and set it in their lollms binding configuration using the anthropic_key parameter.
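For example, here is a minimal sketch of the binding configuration (the anthropic_key parameter name comes from the binding; the value shown is a placeholder):

```yaml
# Anthropic binding configuration (sketch)
anthropic_key: "sk-ant-..."  # paste your own Anthropic API key here
```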

Advancing Multimodal AI

The addition of image support in the Anthropic binding highlights the rapid advancements in multimodal AI. By combining the power of language models with computer vision, we can create more intelligent and context-aware applications.
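For illustration, here is roughly what the underlying call looks like when an image is attached, using Anthropic’s official Python SDK (pip install anthropic). This is a sketch of the provider API the binding wraps, not the binding’s own code:

```python
# Sketch: sending an image plus a text prompt to Claude-3 via Anthropic's SDK.
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_b64}},
            {"type": "text", "text": "Describe what you see in this image."},
        ],
    }],
)
print(message.content[0].text)
```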

lollms users now have the tools to explore and push the boundaries of what’s possible with AI-driven text and image understanding. We’re excited to see the creative and innovative ways the community will utilize these capabilities.

Cost Considerations

It’s important to note that using the Anthropic models incurs costs based on the number of input and output tokens processed. The lollms binding is initialized with the per-token prices in USD for the claude-3-opus-20240229 model. Users should be mindful of token usage to manage expenses effectively.
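As a rough illustration, here is a small cost estimator. The per-million-token prices below are assumptions based on Anthropic’s published Opus pricing at the time of writing; always check the current rates and the values configured in the binding:

```python
# Sketch: estimate the USD cost of a single claude-3-opus-20240229 request.
# The prices are assumptions; verify against Anthropic's pricing page.
INPUT_COST_PER_MTOK = 15.0   # USD per 1M input tokens (assumed)
OUTPUT_COST_PER_MTOK = 75.0  # USD per 1M output tokens (assumed)

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Return the approximate USD cost of one request."""
    return (input_tokens * INPUT_COST_PER_MTOK
            + output_tokens * OUTPUT_COST_PER_MTOK) / 1_000_000

# Example: a 2,000-token prompt with a 500-token answer costs about $0.0675.
print(f"${estimate_cost_usd(2000, 500):.4f}")
```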

Extending lollms with Flexible Bindings

This addition highlights the extensible architecture of lollms, enabling support for various language model providers. The Anthropic binding empowers users with greater choice and flexibility in building cutting-edge language model applications.

What’s Next?

We’ll be closely monitoring the rollout and adoption of this exciting new Anthropic integration. Stay tuned for more updates and insights as the lollms community explores the potential of the Claude-3 model.

We can’t wait to see the innovative ways developers leverage these advancing AI capabilities. The future of natural language processing is looking brighter than ever!

New section added

Welcome to our new news section. From now on, this panel will be your go-to place for all updates related to Lollms. Remember, you are in control: you can deactivate this panel whenever you need to and reopen it at any time via the new information button.

You may have noticed that Lollms updates are quite frequent, especially during weekends or when I’m taking a well-deserved vacation. Since the front end uses hard caching, these updates might not be immediately reflected in your browser. But don’t worry, there’s an easy fix: just hit the refresh button on the topbar to force your browser to reload the front end after each update.


2618 New Models from Lone Striker Join the Lollms Models Zoo

I have expanded the Lollms Models Zoo with 2618 new models sourced from Lone Striker. These models are available in the exl2 format, allowing users to load them easily using the exllama binding. This addition significantly increases the variety and scope of models available within the platform, giving developers and researchers more options for their projects.

New models zoo

Lollms has recently upgraded its Models Zoo with the integration of SQLite. This update significantly improves the speed and stability of loading models for local bindings, providing users with a more efficient and reliable experience.

Previously, the Models Zoo relied on YAML files for storing and managing models. With the adoption of SQLite, loading is faster and more stable, and models can be organized and retrieved more efficiently, making it easier for developers and researchers to find and use the models they need for their projects.

Security upgrades

I am pleased to share that several upgrades have been implemented in Lollms recently to reduce its vulnerability to attacks and to prepare it for the new architecture that should roll out by version 10.

Sharing lollms (with your phone, other PCs, or a Raspberry Pi)

For those of you who want to share your lollms instance, you first need a secure tunnel between your server and the client. Once this is done, set the host value to 0.0.0.0 in the lollms server configuration.

Make sure you replace localhost with either your IP address or 0.0.0.0, which exposes lollms on all your network interfaces. This alone is still not sufficient: Lollms also ships with CORS protection against cross-origin attacks.
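Here is a minimal sketch of the relevant entries in the lollms server configuration (key names may vary slightly between versions):

```yaml
# lollms server configuration (sketch)
host: 0.0.0.0   # was localhost; exposes lollms on all network interfaces
port: 9600
```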

Cross-Origin Resource Sharing (CORS) is a mechanism that allows many resources (e.g., fonts, JavaScript, etc.) on a web page to be requested from another domain outside the domain from which the resource originated.

If you expose your server with a permissive CORS policy that allows all origins, any website can make requests to your server. This can lead to several security issues:

  1. Data Breach: An attacker could make a request to your server from a malicious site, potentially gaining unauthorized access to sensitive data.
  2. Cross-Site Scripting (XSS) Attacks: If your site is vulnerable to XSS attacks, an attacker could inject malicious scripts into your web pages viewed by other users.
  3. Cross-Site Request Forgery (CSRF) Attacks: An attacker could trick a user’s browser into making requests to your server from a malicious site, leading to unauthorized actions.

To mitigate these risks, it’s recommended to implement a strict CORS policy that only allows requests from trusted domains. This way, you can ensure that only legitimate requests are made to your server.

In lollms, by default, localhost:9600 is the only allowed origin, but you can add your own domain(s) to the local_config.yaml file in your personal config folder.

The relevant entry is allowed origins. For example, I allow my dev client, which runs on port 5173, to access lollms, as shown in the sketch below; you can add your own domain(s) the same way.
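Here is a sketch of what that entry looks like in local_config.yaml (the exact key name may differ slightly in your version):

```yaml
# local_config.yaml (in your personal config folder) - sketch
allowed_origins:
  - "http://localhost:5173"          # my dev client on port 5173
  # - "https://your-domain.example"  # add your own trusted domain(s)
```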

Lollms as a headless server

In a significant development, Lollms has introduced a new headless mode, transforming the platform into a dedicated server for other clients. This innovative feature disables all other functionalities, allowing Lollms to serve as a robust server for text generation.

Imagine running your own Lollms instance on the client and using a remote headless Lollms server to generate text for it. This setup ensures no data is stored on the server, making it a secure and efficient solution for multiple users. Moreover, code execution is blocked on the server, providing an additional layer of security.

To get started with the headless mode, you’ll first need to configure Lollms in localhost mode. Once everything is set up, including ensuring the bindings are working correctly and the models are installed, you can activate the headless mode.
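To give a feel for the setup, here is a hypothetical sketch of a thin client asking a headless lollms server for a completion over HTTP. The endpoint path and payload fields are illustrative assumptions, not the documented lollms API; check the server documentation for the real route names:

```python
# Hypothetical client for a headless lollms server (endpoint name assumed).
import requests

LOLLMS_SERVER = "http://192.168.1.10:9600"  # address of your headless instance

def generate(prompt: str, n_predict: int = 256) -> str:
    """Ask the remote server for a text completion."""
    resp = requests.post(
        f"{LOLLMS_SERVER}/lollms_generate",  # assumed endpoint path
        json={"prompt": prompt, "n_predict": n_predict},  # assumed fields
        timeout=120,
    )
    resp.raise_for_status()
    return resp.text

print(generate("Explain what a headless server is in one sentence."))
```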

This new feature opens up exciting possibilities for Lollms users, offering a more secure, flexible, and efficient way to generate text. Stay tuned for more updates as we continue to explore the ever-evolving capabilities of Lollms.

Lollms using https

In a significant development aimed at enhancing security, Lollms has introduced support for HTTPS. While this might not be necessary when operating in localhost, it becomes crucial if you plan to share Lollms as a server, as it safeguards the data transmitted between the server and client.

For privacy reasons, I will not be sharing my own certificates. Instead, you can use your own certificate by placing it inside your personal folder under ‘certs’. Once you’ve placed two files, ‘cert.pem’ and ‘key.pem’, in the appropriate location, Lollms will automatically utilize them and switch to HTTPS mode, ensuring a seamless and secure user experience.
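If you just need a certificate for testing, you can generate a self-signed pair with the standard openssl command-line tool (browsers will warn about self-signed certificates, so use a proper certificate authority for anything public-facing):

```bash
# Create a self-signed certificate valid for one year, then place the two
# files in <your personal folder>/certs so Lollms picks them up.
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -keyout key.pem -out cert.pem -subj "/CN=localhost"
```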

Stay tuned for more updates as we continue to improve and secure your Lollms experience.

Next step

Currently, I am working on a new skills library. This feature will let you engage with the AI, find a solution, and then save that solution to the library as a new skill. The time you spend conversing with the AI thus becomes an investment, a stepping stone towards expanding the AI’s capabilities: each interaction can be reused later, saving you time and making your work with the AI more efficient.

Thanks for being a part of this journey. Your support and feedback are invaluable as we continue to evolve and refine Lollms.

For immediate updates, you can follow me on Twitter @ParisNeo_AI. Join the conversation on our Discord channel and on our subreddit r/lollms. You can also follow me on Instagram at SpaceNerduino for some behind-the-scenes and exclusive content.

Looking forward to the future of AI and robotics, and the exciting innovations that lie ahead with Lollms!

Page built by NewsMaster prompted by ParisNeo