# Description
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [YES] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- [YES] I carefully followed the README.md.
- [YES] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [YES] I reviewed the Discussions, and have a new bug or useful enhancement to share.
# Expected Behavior
llama.cpp compiled with `make LLAMA_CLBLAST=ON` should support prompts longer than 116 characters (114 not counting the quotes).
Works with the following:

```
/home/user/Desktop/Projects/llama.cpp/main --interactive --mlock --ctx_size 4096 --temp 0.239 --top_k 200 --top_p 0.945 --repeat_last_n 512 --batch_size 4096 --repeat_penalty 1.0 --keep -1 --model /home/user/Desktop/Projects/LLaMA/wizardlm-1.0-uncensored-codellama-34b.Q5_K_M.gguf --threads 16 --n_predict 4096 --reverse-prompt User: --n-gpu-layers 16 --prompt "A transcript of a dialog, where the User interacts with his servant named Mia. Mia is an expert in all subjects.12"
```
Does not work with the following:

```
/home/user/Desktop/Projects/llama.cpp/main --interactive --mlock --ctx_size 4096 --temp 0.239 --top_k 200 --top_p 0.945 --repeat_last_n 512 --batch_size 4096 --repeat_penalty 1.0 --keep -1 --model /home/user/Desktop/Projects/LLaMA/wizardlm-1.0-uncensored-codellama-34b.Q5_K_M.gguf --threads 16 --n_predict 4096 --reverse-prompt User: --n-gpu-layers 16 --prompt "A transcript of a dialog, where the User interacts with his servant named Mia. Mia is an expert in all subjects.123"
```
To be clear, this prompt may work when llama.cpp is compiled without LLAMA_CLBLAST=ON.
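For reference, the lengths of the two prompts can be double-checked with a tiny standalone program (a minimal sketch; the strings are copied verbatim from the commands above, and lengths are counted in bytes, not tokens):

```c
/* Counts the characters in the two prompts used above.
 * The working prompt is 114 characters, the crashing one 115,
 * so the failure boundary sits between them. */
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *works   = "A transcript of a dialog, where the User interacts with "
                          "his servant named Mia. Mia is an expert in all subjects.12";
    const char *crashes = "A transcript of a dialog, where the User interacts with "
                          "his servant named Mia. Mia is an expert in all subjects.123";
    printf("works:   %zu chars\n", strlen(works));   /* prints 114 */
    printf("crashes: %zu chars\n", strlen(crashes)); /* prints 115 */
    return 0;
}
```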
# Current Behavior
```
llm_load_tensors: using OpenCL for GPU acceleration
llm_load_tensors: mem required = 15233.81 MB (+ 768.00 MB per state)
llm_load_tensors: offloading 16 repeating layers to GPU
llm_load_tensors: offloaded 16/49 layers to GPU
llm_load_tensors: VRAM used: 7501 MB
....................................................................................................
llama_new_context_with_model: kv self size = 768.00 MB
llama_new_context_with_model: compute buffer total size = 4481.49 MB
system_info: n_threads = 16 / 24 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
main: interactive mode on.
Reverse prompt: 'User:'
sampling: repeat_last_n = 512, repeat_penalty = 1.000000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 200, tfs_z = 1.000000, top_p = 0.945000, typical_p = 1.000000, temp = 0.239000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 4096, n_batch = 4096, n_predict = 4096, n_keep = 32
== Running in interactive mode. ==
- Press Ctrl+C to interject at any time.
- Press Return to return control to LLaMa.
- To return control without starting a new line, end your input with '/'.
- If you want to submit another line, end your input with '\'.
A transcript of a dialog, where the User interacts with his servant named Mia. Mia is an expert in all subjects.123GGML_ASSERT: ggml.c:11270: ne02 == ne12
Aborted (core dumped)
```
# Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.
- Physical (or virtual) hardware you are using, e.g. for Linux:
```
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5900X 12-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 76%
CPU max MHz: 4950.1948
CPU min MHz: 2200.0000
BogoMIPS: 7400.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization features:
Virtualization: AMD-V
Caches (sum of all):
L1d: 384 KiB (12 instances)
L1i: 384 KiB (12 instances)
L2: 6 MiB (12 instances)
L3: 64 MiB (2 instances)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Retbleed: Not affected
Spec rstack overflow: Vulnerable, no microcode
Spec store bypass: Vulnerable
Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Srbds: Not affected
Tsx async abort: Not affected
```
```
llama.cpp]$ clinfo
Number of platforms 1
Platform Name AMD Accelerated Parallel Processing
Platform Vendor Advanced Micro Devices, Inc.
Platform Version OpenCL 2.1 AMD-APP (3581.0)
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_icd cl_amd_event_callback
Platform Extensions function suffix AMD
Platform Host timer resolution 1ns
Platform Name AMD Accelerated Parallel Processing
Number of devices 1
Device Name gfx1030
Device Vendor Advanced Micro Devices, Inc.
Device Vendor ID 0x1002
Device Version OpenCL 2.0
Driver Version 3581.0 (HSA1.1,LC)
Device OpenCL C Version OpenCL C 2.0
Device Type GPU
Device Board Name (AMD) AMD Radeon RX 6900 XT
Device PCI-e ID (AMD) 0x73af
Device Topology (AMD) PCI-E, 0000:2f:00.0
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 40
SIMD per compute unit (AMD) 4
SIMD width (AMD) 32
SIMD instruction width (AMD) 1
Max clock frequency 2720MHz
Graphics IP (AMD) 10.3
Device Partition (core)
Max number of sub-devices 40
Supported partition types None
Supported affinity domains (n/a)
Max work item dimensions 3
Max work item sizes 1024x1024x1024
Max work group size 256
Preferred work group size (AMD) 256
Max work group size (AMD) 1024
Preferred work group size multiple (kernel) 32
Wavefront width (AMD) 32
Preferred / native vector sizes
char 4 / 4
short 2 / 2
int 1 / 1
long 1 / 1
half 1 / 1 (cl_khr_fp16)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (cl_khr_fp16)
Denormals No
Infinity and NANs No
Round to nearest No
Round to zero No
Round to infinity No
IEEE754-2008 fused multiply-add No
Support is emulated in software No
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 17163091968 (15.98GiB)
Global free memory (AMD) 16556032 (15.79GiB) 16556032 (15.79GiB)
Global memory channels (AMD) 8
Global memory banks per channel (AMD) 4
Global memory bank width (AMD) 256 bytes
Error Correction support No
Max memory allocation 14588628168 (13.59GiB)
Unified memory for Host and Device No
Shared Virtual Memory (SVM) capabilities (core)
Coarse-grained buffer sharing Yes
Fine-grained buffer sharing Yes
Fine-grained system sharing No
Atomics No
Minimum alignment for any data type 128 bytes
Alignment of base address 1024 bits (128 bytes)
Preferred alignment for atomics
SVM 0 bytes
Global 0 bytes
Local 0 bytes
Max size for global variable 14588628168 (13.59GiB)
Preferred total size of global vars 17163091968 (15.98GiB)
Global Memory cache type Read/Write
Global Memory cache size 16384 (16KiB)
Global Memory cache line size 64 bytes
Image support Yes
Max number of samplers per kernel 29615
Max size for 1D images from buffer 134217728 pixels
Max 1D or 2D image array size 8192 images
Base address alignment for 2D image buffers 256 bytes
Pitch alignment for 2D image buffers 256 pixels
Max 2D image size 16384x16384 pixels
Max 3D image size 16384x16384x8192 pixels
Max number of read image args 128
Max number of write image args 8
Max number of read/write image args 64
Max number of pipe args 16
Max active pipe reservations 16
Max pipe packet size 1703726280 (1.587GiB)
Local memory type Local
Local memory size 65536 (64KiB)
Local memory size per CU (AMD) 65536 (64KiB)
Local memory banks (AMD) 32
Max number of constant args 8
Max constant buffer size 14588628168 (13.59GiB)
Preferred constant buffer size (AMD) 16384 (16KiB)
Max size of kernel argument 1024
Queue properties (on host)
Out-of-order execution No
Profiling Yes
Queue properties (on device)
Out-of-order execution Yes
Profiling Yes
Preferred size 262144 (256KiB)
Max size 8388608 (8MiB)
Max queues on device 1
Max events on device 1024
Prefer user sync for interop Yes
Number of P2P devices (AMD) 0
Profiling timer resolution 1ns
Profiling timer offset since Epoch (AMD) 0ns (Wed Dec 31 19:00:00 1969)
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Thread trace supported (AMD) No
Number of async queues (AMD) 8
Max real-time compute queues (AMD) 8
Max real-time compute units (AMD) 40
printf() buffer size 4194304 (4MiB)
Built-in kernels (n/a)
Device Extensions cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p cl_amd_assembly_program
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) No platform
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) No platform
clCreateContext(NULL, ...) [default] No platform
clCreateContext(NULL, ...) [other] Success [AMD]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx1030
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx1030
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx1030
```
- Operating System, e.g. for Linux:
```
Linux phoenix-pc 6.4.12-zen1-1-zen #1 ZEN SMP PREEMPT_DYNAMIC Thu, 24 Aug 2023 00:37:46 +0000 x86_64 GNU/Linux
```

(Running Arch Linux with Zen Kernel)
- SDK version, e.g. for Linux:
```
Python 3.11.5
GNU Make 4.4.1
Built for x86_64-pc-linux-gnu
g++ (GCC) 13.2.1 20230801
```
# Failure Information (for bugs)
When llama.cpp is compiled with the LLAMA_CLBLAST=ON option, it does not handle long prompts (longer than 114-116 characters).
# Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
- Compile with `make LLAMA_CLBLAST=ON`
- Run with any LLaMA GGUF model, or use the same one as above (https://huggingface.co/TheBloke/WizardLM-1.0-Uncensored-CodeLlama-34B-GGUF)
- Use the command-line options shown above
- Use a 120-character prompt
# Failure Logs
```
GGML_ASSERT: ggml.c:11270: ne02 == ne12
Aborted (core dumped)
```
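For context, ne02 and ne12 in the assertion are the third dimensions of the two operands of a ggml matrix multiplication, which are required to match. Below is a simplified stand-in for the kind of check that fires (an illustration under that assumption, not the actual ggml.c code):

```c
/* Illustration of the failing shape check, assuming it is the operand
 * test in ggml's mul_mat path: ggml tensors carry four dimensions
 * ne[0..3], and dimension 2 of both operands must be equal. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

struct toy_tensor { int64_t ne[4]; }; /* hypothetical stand-in for ggml_tensor */

static void check_mul_mat(const struct toy_tensor *src0,
                          const struct toy_tensor *src1) {
    if (src0->ne[2] != src1->ne[2]) { /* the ne02 == ne12 condition */
        fprintf(stderr, "GGML_ASSERT: ggml.c:11270: ne02 == ne12\n");
        abort(); /* mirrors the "Aborted (core dumped)" above */
    }
}

int main(void) {
    struct toy_tensor a = { .ne = {128, 64, 8, 1} };
    struct toy_tensor b = { .ne = {128, 64, 64, 1} }; /* mismatched dim 2 */
    check_mul_mat(&a, &b); /* aborts, reproducing the reported message */
    return 0;
}
```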
Environment info:
```
llama.cpp$ git log | head -1
commit 8781013ef654270cbead3e0011e33a6d690fb168
```