====== AutoComplete ======
AutoComplete is a Minecraft hack that generates auto-completions for the user's chat messages, using large language models like GPT-3, GPT-4 and LLaMA.
===== Settings =====
==== OpenAI model ====
{{template>:
|NAME=OpenAI model
|DESCRIPTION="
|DEFAULT=gpt-4o-2024-08-06
|VALUES=gpt-4o-2024-08-06, gpt-4o-2024-05-13, gpt-4o-mini-2024-07-18, gpt-4-turbo-2024-04-09, gpt-4-0125-preview, gpt-4-1106-preview, gpt-4-0613, gpt-3.5-turbo-0125, gpt-3.5-turbo-1106, gpt-3.5-turbo-instruct, davinci-002, babbage-002
}}
==== Max tokens ====
{{template>:
|NAME=Max tokens
|DESCRIPTION="The maximum number of tokens that the model can generate.\\ \\ Higher values allow the model to predict longer chat messages, but also increase the time it takes to generate predictions.\\ \\ The default value of 16 is fine for most use cases."
|DEFAULT=16 tokens
|MIN=1 token
|MAX=100 tokens
|INCREMENT=1 token
}}
==== Temperature ====
{{template>:
|NAME=Temperature
|DESCRIPTION="
|DEFAULT=1
|MIN=0
|MAX=2
|INCREMENT=0.01
}}
Note: Temperature values above 1 will cause most language models to generate complete nonsense and should only be used for comedic effect.
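Temperature is applied on the API side, not by AutoComplete itself, but the underlying mechanism is standard: the model's raw token scores are divided by the temperature before being turned into probabilities. A minimal sketch of why values above 1 degrade output (the three-token scores are made up for illustration):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw token scores into probabilities, scaled by temperature.

    Dividing by temperature < 1 sharpens the distribution (the top
    token wins more often); temperature > 1 flattens it toward uniform
    randomness, which is why very high values produce nonsense.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 0.0]  # made-up scores for three candidate tokens
low = softmax_with_temperature(logits, 0.5)   # sharper distribution
high = softmax_with_temperature(logits, 2.0)  # flatter distribution
```

At temperature 0.5 the top token dominates almost completely, while at 2.0 even the weakest token gets a noticeable share of the probability.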
==== Top P ====
{{template>:
|NAME=Top P
|DESCRIPTION="An alternative to temperature. Makes the model less random by only letting it choose from the most likely tokens.\\ \\ A value of 100% disables this feature by letting the model choose from all tokens."
|DEFAULT=100%
|MIN=0%
|MAX=100%
|INCREMENT=1%
}}
==== Presence penalty ====
{{template>:
|NAME=Presence penalty
|DESCRIPTION="
|DEFAULT=0
|MIN=-2
|MAX=2
|INCREMENT=0.01
}}
==== Frequency penalty ====
{{template>:
|NAME=Frequency penalty
|DESCRIPTION="
|DEFAULT=0
|MIN=-2
|MAX=2
|INCREMENT=0.01
}}
==== Stop sequence ====
{{template>:
|NAME=Stop sequence
|DESCRIPTION="
|DEFAULT=Line Break
|VALUES=Line Break, Next Message
}}
Note: "
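The chosen stop sequence is passed to the API, which cuts generation off where it first appears. The client-side effect is equivalent to truncating the completion at the first occurrence; "Line Break" corresponds to stopping at `\n` (the chat lines below are invented examples):

```python
def apply_stop_sequence(completion, stop):
    """Cut the completion at the first stop sequence, mirroring what
    the API's "stop" parameter does server-side."""
    index = completion.find(stop)
    return completion if index == -1 else completion[:index]

# "Line Break" keeps only the first line of the generated text.
suggestion = apply_stop_sequence("nice base!\n<Steve> thanks", "\n")
```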
==== Context length ====
{{template>:
|NAME=Context length
|DESCRIPTION="
|DEFAULT=10 messages
|MIN=0 (unlimited)
|MAX=100 messages
|INCREMENT=1 message
}}
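A sketch of what trimming the prompt to the last N chat messages looks like, assuming the hack simply drops older messages (0 meaning no limit, per the slider's minimum):

```python
def build_context(chat_history, context_length):
    """Keep only the most recent messages for the prompt.
    A context_length of 0 means unlimited."""
    if context_length == 0:
        return list(chat_history)
    return list(chat_history)[-context_length:]
```

Lower values save tokens on every request at the cost of the model knowing less about the ongoing conversation.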
==== Filter server messages ====
{{template>:
|NAME=Filter server messages
|DESCRIPTION="Only shows player-made chat messages to the model.\\ \\ This can help you save tokens and get more out of a low context length, but it also means that the model will have no idea about events like players joining, leaving, dying, etc."
|DEFAULT=not checked
}}

==== Custom model ====
{{template>:
|NAME=Custom model
|DESCRIPTION=""
|DEFAULT=(empty)
}}

==== Custom model type ====
{{template>:
|NAME=Custom model type
|DESCRIPTION=""
|DEFAULT=Chat
|VALUES=Chat,
}}
==== OpenAI chat endpoint ====
{{template>:
|NAME=OpenAI chat endpoint
|DESCRIPTION="
|DEFAULT=''
}}
The "
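The settings above map onto the standard parameters of an OpenAI-compatible ''/v1/chat/completions'' request. A rough sketch of such a request body; the keys are the public OpenAI API parameters, while the one-message-per-chat-line prompt layout is an assumption for illustration, not AutoComplete's actual format:

```python
def chat_completion_body(model, history, draft, settings):
    """Assemble a request body for an OpenAI-compatible
    /v1/chat/completions endpoint from AutoComplete-style settings."""
    messages = [{"role": "user", "content": line} for line in history]
    messages.append({"role": "user", "content": draft})
    return {
        "model": model,
        "messages": messages,
        "max_tokens": settings.get("max_tokens", 16),
        "temperature": settings.get("temperature", 1),
        "top_p": settings.get("top_p", 1.0),
        "presence_penalty": settings.get("presence_penalty", 0),
        "frequency_penalty": settings.get("frequency_penalty", 0),
        "stop": ["\n"],  # the "Line Break" stop sequence
    }

body = chat_completion_body(
    "gpt-4o-2024-08-06",
    ["<Alex> anyone selling diamonds?"],
    "<Steve> ",
    {},
)
```

Because only the endpoint URL changes, any server that accepts this schema (for example a locally hosted model behind an OpenAI-compatible proxy) can be used in place of OpenAI's API.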
==== OpenAI legacy endpoint ====
{{template>:
|NAME=OpenAI legacy endpoint
|DESCRIPTION="
|DEFAULT=''
}}
==== Max suggestions per draft ====
{{template>:
|NAME=Max suggestions per draft
|DESCRIPTION="
|DEFAULT=3
|MIN=1
|MAX=10
|INCREMENT=1
}}
The "Max suggestions per draft" setting controls how many different suggestions the AI will try to generate for the same draft message. Higher values will result in more suggestions,
==== Max suggestions kept ====
{{template>:
|NAME=Max suggestions kept
|DESCRIPTION="
|DEFAULT=100 messages
|MIN=10 messages
|MAX=1000 messages
|INCREMENT=10 messages
}}
The "Max suggestions kept" setting only controls at what point old suggestions are deleted from memory. Higher values don't use any additional tokens and only consume a tiny amount of RAM. This is why the range of values is so much higher than for the other settings.
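One simple way to implement such a fixed-size history is a bounded double-ended queue; this Python sketch is an illustration of the eviction behavior, not the hack's actual Java implementation:

```python
from collections import deque

# A deque with maxlen discards its oldest entries automatically once
# the cap is reached, which is the eviction behavior this setting controls.
suggestions = deque(maxlen=100)
for i in range(150):
    suggestions.append(f"suggestion {i}")
```

After 150 insertions only the newest 100 entries remain; no extra tokens are spent because the cache is purely local.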
==== Max suggestions shown ====
{{template>:
|NAME=Max suggestions shown
|DESCRIPTION="How many suggestions can be shown above the chat box.\\ \\ If this is set too high, the suggestions will obscure some of the existing chat messages. How high you can set this depends on your screen resolution and GUI scale."
|DEFAULT=5
|MIN=1
|MAX=10
|INCREMENT=1
}}
The "Max suggestions shown" setting controls how many suggestions can be shown at once on the screen. Depending on the user's screen resolution and GUI scale, higher values may cause the suggestions to cover up other parts of the UI.
===== Changes =====
{{template>
{{tag>

---- struct data ----
hack.name
hack.image
hack.category
hack.in-game description : Auto-completes your chat messages using large language models. Requires an OpenAI account with API access or any other language model API that is OpenAI-compatible.
hack.default keybind : none
hack.source code : net/
----
autocomplete · Last modified: 2024/10/08 08:35 by alexander01998