How much storage does a varchar take?
The storage requirements for table data on disk depend on several factors. Different storage engines represent data types and store raw data differently. Table data might be compressed, either for a column or an entire row, complicating the calculation of storage requirements for a table or column.
Despite differences in storage layout on disk, the internal MySQL APIs that communicate and exchange information about table rows use a consistent data structure that applies across all storage engines.
This section includes guidelines and information for the storage requirements for each data type supported by MySQL, including the internal format and size for storage engines that use a fixed-size representation for data types. Information is listed by category or storage engine.
The internal representation of a table has a maximum row size of 65,535 bytes, even if the storage engine is capable of supporting larger rows. This figure excludes BLOB or TEXT columns, which contribute only 9 to 12 bytes toward this size. For BLOB and TEXT data, the information is stored internally in a different area of memory than the row buffer. Different storage engines handle the allocation and storage of this data in different ways, according to the method they use for handling the corresponding types. For more information, see Chapter 16, Alternative Storage Engines, and Section 8.4.7, “Limits on Table Column Count and Row Size”.
1 Answer
You have actually touched on a question that is neither simple nor short. My answer concerns strictly PostgreSQL, with references to its source code and implementation details (because I can).
The number in parentheses in varchar is only a constraint. It affects absolutely nothing except producing an error when you try to write something exceeding that limit. And an important caveat right away: this number is a length in characters, not bytes. The reader probably already understands that the storage format cannot depend on this number: it would be unreasonable to store four times as much data for different strings of the same declared length. (Are there longer encodings? Perhaps, but I don't know of any; Unicode still fits within 4 bytes per character.)
For storing text, the basic rule is:
The storage requirement for a short string (up to 126 bytes) is 1 byte plus the actual string, which includes the space padding in the case of character. Longer strings have 4 bytes of overhead instead of 1. Long strings are compressed by the system automatically, so the physical requirement on disk might be less. Very long values are also stored in background tables so that they do not interfere with rapid access to shorter column values. In any case, the longest possible character string that can be stored is about 1 GB.
- broadly speaking, the storage scheme records the string's length in bytes at the beginning of the value
- strings of up to 126 bytes are encoded as a varattrib_1b structure: 1 byte stores the string length (in bytes), with 1 bit of it reserved as a marker indicating the 1-byte storage format.
- if the string does not fit into 126 bytes, the long form varattrib_4b is used: the header takes 4 bytes, which allows substantially longer strings. Again, some bits are reserved, but up to 1 GB of data can be stored.
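The header-size rule above can be sketched as follows (a simplified model, not actual PostgreSQL source; varattrib_1b and varattrib_4b are the real struct names, but this function is purely illustrative):

```python
def varlena_overhead(data_len_bytes: int) -> int:
    """Header bytes PostgreSQL needs for an in-line varchar/text value."""
    if data_len_bytes <= 126:
        return 1   # varattrib_1b: 1-byte header (1 bit marks the short form)
    return 4       # varattrib_4b: 4-byte header, values up to ~1 GB

# 'привет' is 12 bytes in UTF-8, so pg_column_size() reports 12 + 1 = 13
assert len('привет'.encode('utf-8')) + varlena_overhead(12) == 13
```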
The real adventures begin later, though. PostgreSQL manipulates data only in fixed blocks of 8 KB (normally; it is a compile-time setting of the DBMS). How do you store 1 GB of data then? A great trick with the ears comes to the rescue (an elephant's ears are big, so with a logo like that everything is clear about us): long strings are sliced into pieces and moved into a separate TOAST table. A different header form is used here, varattrib_1b_e, which stores, instead of the data, an identifier by which the data can be read back from TOAST.
On top of that, long values may be compressed by the database itself. "Long" means a quarter of a block (that is, 2 KB). They may be compressed after being moved to TOAST, or before it: compressed right in place and stored there, if the compressed data fits into 2 KB. (Within certain limits this logic can be controlled via ALTER TABLE.)
And so we arrive at the second form of the already familiar varattrib_4b: the compressed-in-line format. After the familiar 4 bytes of header with the on-disk data length, another 4 bytes store the length of the uncompressed text.
The whole point is that none of this variety depends on the table declaration in any way. The format used is whichever suits the actual data.
In short, I have thoroughly buried the point in internals. So, taking the specific questions one by one:
What happens if I insert a string that contains, say, only 20 characters?
20 characters is below the 126-byte threshold (even at 4 bytes per character that is only 80 bytes), so the database can use the short form with a 1-byte header. The string will therefore occupy from 21 to 81 bytes, depending on which encoding your database uses and which characters they are. For example:
The string 'привет' in UTF-8 takes 12 bytes, so pg_column_size reports a total of 13 bytes.
Will the DBMS reserve memory for the remaining 108 characters?
No, it will not, if we are talking about varchar rather than char. That is exactly the difference between them. Moreover, in PostgreSQL specifically there is no benefit whatsoever from char's padding behavior; it is merely a requirement of the standard.
So I can create varchar(1024) columns, insert a single character into each row, and there will be no difference in storage compared to varchar(1)?
Yes, exactly. The number is a constraint on the data; the storage format does not depend on it.
And, the main question: how important is it to pick a type of minimal size for the sake of rational memory use (that is, to avoid obvious excess when designing the database schema)?
In my practice there have been cases where, due to a bug, a field meant for 10 characters received 10 megabytes of who-knows-what from the application. Getting an error at write time simply makes it easier to find the bug in the application.
A note on NULL
All NULLs in PostgreSQL, regardless of the field type, are stored identically: in a bitmask after the tuple header (the header of a single table row). Memory use: 1 byte for every 8 fields that can be NULL (rounded up, of course).
When a NULL is inserted, the bit corresponding to that field is set in the bitmask, and the field's data is then not written at all. The field is NULL, and one bit in the bitmask is more than enough; there is no need to store anything else.
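The bitmap cost described above can be computed directly (a sketch; the ceiling division mirrors the 1-byte-per-8-nullable-fields rule):

```python
def null_bitmap_bytes(nullable_fields: int) -> int:
    """Bytes the tuple's NULL bitmap adds: 1 byte per 8 nullable fields, rounded up."""
    return (nullable_fields + 7) // 8

# 8 nullable fields fit in one byte; a ninth field forces a second byte
assert null_bitmap_bytes(8) == 1
assert null_bitmap_bytes(9) == 2
```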
If I have a VARCHAR of 200 characters and that I put a string of 100 characters, will it use 200 bytes or it will just use the actual size of the string?
String Type Storage Requirements
In the following table, M represents the declared column length in characters for nonbinary string types and bytes for binary string types. L represents the actual length in bytes of a given string value.
Data Type | Storage Required |
---|---|
CHAR(M) | The compact family of InnoDB row formats optimizes storage for variable-length character sets. See COMPACT Row Format Storage Characteristics. Otherwise, M × w bytes, 0 ≤ M ≤ 255, where w is the number of bytes required for the maximum-length character in the character set. |
BINARY(M) | M bytes, 0 ≤ M ≤ 255 |
VARCHAR(M), VARBINARY(M) | L + 1 bytes if column values require 0 to 255 bytes, L + 2 bytes if values may require more than 255 bytes |
TINYBLOB, TINYTEXT | L + 1 bytes, where L < 2^8 |
BLOB, TEXT | L + 2 bytes, where L < 2^16 |
MEDIUMBLOB, MEDIUMTEXT | L + 3 bytes, where L < 2^24 |
LONGBLOB, LONGTEXT | L + 4 bytes, where L < 2^32 |
ENUM('value1','value2',...) | 1 or 2 bytes, depending on the number of enumeration values (65,535 values maximum) |
SET('value1','value2',...) | 1, 2, 3, 4, or 8 bytes, depending on the number of set members (64 members maximum) |
Variable-length string types are stored using a length prefix plus data. The length prefix requires from one to four bytes depending on the data type, and the value of the prefix is L (the byte length of the string). For example, storage for a MEDIUMTEXT value requires L bytes to store the value plus three bytes to store the length of the value.
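The prefix sizes follow from the maximum length each type must be able to encode; as a sketch (the type names and limits come from the table above, the function is illustrative):

```python
# Length-prefix bytes per variable-length type: a prefix of n bytes
# can encode any length L < 2**(8*n)
PREFIX_BYTES = {'TINYTEXT': 1, 'TEXT': 2, 'MEDIUMTEXT': 3, 'LONGTEXT': 4}

def stored_bytes(type_name: str, value_len_bytes: int) -> int:
    """Total storage: the value itself plus its length prefix."""
    n = PREFIX_BYTES[type_name]
    assert value_len_bytes < 2 ** (8 * n), 'value too long for this type'
    return value_len_bytes + n

# A MEDIUMTEXT value of L bytes needs L + 3 bytes, as the text above states
assert stored_bytes('MEDIUMTEXT', 1000) == 1003
```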
To calculate the number of bytes used to store a particular CHAR , VARCHAR , or TEXT column value, you must take into account the character set used for that column and whether the value contains multibyte characters. In particular, when using a utf8 Unicode character set, you must keep in mind that not all characters use the same number of bytes. utf8mb3 and utf8mb4 character sets can require up to three and four bytes per character, respectively. For a breakdown of the storage used for different categories of utf8mb3 or utf8mb4 characters, see Section 10.9, “Unicode Support”.
VARCHAR , VARBINARY , and the BLOB and TEXT types are variable-length types. For each, the storage requirements depend on these factors:
The actual length of the column value
The column's maximum possible length
The character set used for the column, because some character sets contain multibyte characters
For example, a VARCHAR(255) column can hold a string with a maximum length of 255 characters. Assuming that the column uses the latin1 character set (one byte per character), the actual storage required is the length of the string ( L ), plus one byte to record the length of the string. For the string 'abcd' , L is 4 and the storage requirement is five bytes. If the same column is instead declared to use the ucs2 double-byte character set, the storage requirement is 10 bytes: The length of 'abcd' is eight bytes and the column requires two bytes to store lengths because the maximum length is greater than 255 (up to 510 bytes).
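Both cases in this example can be reproduced numerically (a sketch; bytes_per_char stands in for the column's character-set width, and the function name is illustrative):

```python
def varchar_storage(chars: int, bytes_per_char: int, max_chars: int) -> int:
    """Data bytes plus a 1- or 2-byte length prefix, per the rule above."""
    max_bytes = max_chars * bytes_per_char
    prefix = 1 if max_bytes <= 255 else 2
    return chars * bytes_per_char + prefix

# 'abcd' in a latin1 VARCHAR(255): 4 bytes of data + 1-byte prefix
assert varchar_storage(4, 1, 255) == 5
# 'abcd' in a ucs2 VARCHAR(255): 8 bytes of data + 2-byte prefix (max is 510 bytes)
assert varchar_storage(4, 2, 255) == 10
```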
The effective maximum number of bytes that can be stored in a VARCHAR or VARBINARY column is subject to the maximum row size of 65,535 bytes, which is shared among all columns. For a VARCHAR column that stores multibyte characters, the effective maximum number of characters is less. For example, utf8mb4 characters can require up to four bytes per character, so a VARCHAR column that uses the utf8mb4 character set can be declared to be a maximum of 16,383 characters. See Section 8.4.7, “Limits on Table Column Count and Row Size”.
InnoDB encodes fixed-length fields greater than or equal to 768 bytes in length as variable-length fields, which can be stored off-page. For example, a CHAR(255) column can exceed 768 bytes if the maximum byte length of the character set is greater than 3, as it is with utf8mb4 .
The NDB storage engine supports variable-width columns. This means that a VARCHAR column in an NDB Cluster table requires the same amount of storage as would any other storage engine, with the exception that such values are 4-byte aligned. Thus, the string 'abcd' stored in a VARCHAR(50) column using the latin1 character set requires 8 bytes (rather than 5 bytes for the same column value in a MyISAM table).
TEXT, BLOB, and JSON columns are implemented differently in the NDB storage engine, wherein each row in the column is made up of two separate parts. One of these is of fixed size (256 bytes for TEXT and BLOB, 4000 bytes for JSON), and is actually stored in the original table. The other consists of any data in excess of 256 bytes, which is stored in a hidden blob parts table. The size of the rows in this second table is determined by the exact type of the column, as shown in the following table:
Type | Blob Part Size |
---|---|
BLOB , TEXT | 2000 |
MEDIUMBLOB , MEDIUMTEXT | 4000 |
LONGBLOB , LONGTEXT | 13948 |
JSON | 8100 |
This means that the size of a TEXT column is 256 if size ≤ 256 (where size represents the size of the row); otherwise, the size is 256 + size + (2000 × (size − 256) % 2000).
No blob parts are stored separately by NDB for TINYBLOB or TINYTEXT column values.
You can increase the size of an NDB blob column's blob part to the maximum of 13948 using NDB_COLUMN in a column comment when creating or altering the parent table. In NDB 8.0.30 and later, it is also possible to set the inline size for a TEXT , BLOB , or JSON column, using NDB_TABLE in a column comment. See NDB_COLUMN Options, for more information.
The size of an ENUM object is determined by the number of different enumeration values. One byte is used for enumerations with up to 255 possible values. Two bytes are used for enumerations having between 256 and 65,535 possible values. See Section 11.3.5, “The ENUM Type”.
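As a sketch of that rule (the function name is illustrative):

```python
def enum_storage_bytes(num_values: int) -> int:
    """1 byte for up to 255 enumeration values, 2 bytes for 256 to 65,535."""
    assert 1 <= num_values <= 65535
    return 1 if num_values <= 255 else 2

assert enum_storage_bytes(255) == 1
assert enum_storage_bytes(256) == 2
```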
JSON Storage Requirements
In general, the storage requirement for a JSON column is approximately the same as for a LONGBLOB or LONGTEXT column; that is, the space consumed by a JSON document is roughly the same as it would be for the document's string representation stored in a column of one of these types. However, there is an overhead imposed by the binary encoding, including metadata and dictionaries needed for lookup, of the individual values stored in the JSON document. For example, a string stored in a JSON document requires 4 to 10 bytes additional storage, depending on the length of the string and the size of the object or array in which it is stored.
In addition, MySQL imposes a limit on the size of any JSON document stored in a JSON column such that it cannot be any larger than the value of max_allowed_packet .
How is memory reserved for the varchar type? For example, there is a table with a column declared as varchar(128). What happens if I insert a string that contains, say, only 20 characters? Will the DBMS reserve memory for the remaining 108 characters? I am also curious about the behavior when NULL is inserted. And, the main question: how important is it to pick a type of minimal size for the sake of rational memory use (that is, to avoid obvious excess when designing the database schema)?
Will the DBMS reserve memory for the remaining 108 characters? No. Curious about the behavior when NULL is inserted. As a rule, a separate byte. How important is it to pick a type of minimal size for rational memory use? Within 2^(8n)−1 it makes no difference.
@Akina so I can create varchar(1024) columns, insert a single character into each row, and there will be no difference in storage compared to varchar(1)?
There is no difference between VARCHAR(1) and VARCHAR(255). There is no difference between VARCHAR(256) and VARCHAR(65535). But between VARCHAR(255) and VARCHAR(256) there is a difference: a whole byte.
@Akina, that is about MySQL. PostgreSQL has no 65,535-byte limit and therefore no two-byte length record. But the caveat about the encoding matters for MySQL too: the stored length is the string length in bytes, and the single length byte applies only to strings of at most 255 bytes, while varchar(N) is measured in characters.
8 Answers
Keep in mind that MySQL has a maximum row size limit
The internal representation of a MySQL table has a maximum row size limit of 65,535 bytes, not counting BLOB and TEXT types. BLOB and TEXT columns only contribute 9 to 12 bytes toward the row size limit because their contents are stored separately from the rest of the row. Read more about Limits on Table Column Count and Row Size.
The maximum size a single column can occupy differs before and after MySQL 5.0.3
Values in VARCHAR columns are variable-length strings. The length can be specified as a value from 0 to 255 before MySQL 5.0.3, and 0 to 65,535 in 5.0.3 and later versions. The effective maximum length of a VARCHAR in MySQL 5.0.3 and later is subject to the maximum row size (65,535 bytes, which is shared among all columns) and the character set used.
However, note that the limit is lower if you use a multi-byte character set like utf8 or utf8mb4.
Use TEXT types in order to overcome the row size limit.
The four TEXT types are TINYTEXT, TEXT, MEDIUMTEXT, and LONGTEXT. These correspond to the four BLOB types and have the same maximum lengths and storage requirements.
More details on BLOB and TEXT Types
Even more
Checkout more details on Data Type Storage Requirements which deals with storage requirements for all data types.
I try to avoid TEXT columns though, as their presence can force temporary tables to be created when sorting.
If I use varchar(200) for a first name and store only 6 characters in this field, how many bytes will the first name occupy?
@PareshGami - 6 + 1 = 7 bytes! In contrast to CHAR, VARCHAR values are stored as a 1-byte or 2-byte length prefix plus data.
As per the online docs, there is a 64K row limit, and you can work out the row size from the per-column storage requirements.
You need to keep in mind that the column lengths aren't a one-to-one mapping of their size. For example, CHAR(10) CHARACTER SET utf8 requires three bytes for each of the ten characters since that particular encoding has to account for the three-bytes-per-character property of utf8 (that's MySQL's utf8 encoding rather than "real" UTF-8, which can have up to four bytes).
But, if your row size is approaching 64K, you may want to examine the schema of your database. It's a rare table that needs to be that wide in a properly set up (3NF) database - it's possible, just not very common.
If you want to use more than that, you can use the BLOB or TEXT types. These do not count against the 64K limit of the row (other than a small administrative footprint) but you need to be aware of other problems that come from their use, such as not being able to sort using the entire text block beyond a certain number of characters (though this can be configured upwards), forcing temporary tables to be on disk rather than in memory, or having to configure client and server comms buffers to handle the sizes efficiently.
You still have the byte/character mismatch (so that a MEDIUMTEXT utf8 column can store "only" about half a million characters, (16M-1)/3 = 5,592,405 ) but it still greatly expands your range.
The CHAR and VARCHAR types are similar, but differ in the way they are stored and retrieved. They also differ in maximum length and in whether trailing spaces are retained.
The CHAR and VARCHAR types are declared with a length that indicates the maximum number of characters you want to store. For example, CHAR(30) can hold up to 30 characters.
The length of a CHAR column is fixed to the length that you declare when you create the table. The length can be any value from 0 to 255. When CHAR values are stored, they are right-padded with spaces to the specified length. When CHAR values are retrieved, trailing spaces are removed unless the PAD_CHAR_TO_FULL_LENGTH SQL mode is enabled.
Values in VARCHAR columns are variable-length strings. The length can be specified as a value from 0 to 65,535. The effective maximum length of a VARCHAR is subject to the maximum row size (65,535 bytes, which is shared among all columns) and the character set used. See Section 8.4.7, “Limits on Table Column Count and Row Size”.
In contrast to CHAR , VARCHAR values are stored as a 1-byte or 2-byte length prefix plus data. The length prefix indicates the number of bytes in the value. A column uses one length byte if values require no more than 255 bytes, two length bytes if values may require more than 255 bytes.
If strict SQL mode is not enabled and you assign a value to a CHAR or VARCHAR column that exceeds the column's maximum length, the value is truncated to fit and a warning is generated. For truncation of nonspace characters, you can cause an error to occur (rather than a warning) and suppress insertion of the value by using strict SQL mode. See Section 5.1.10, “Server SQL Modes”.
For VARCHAR columns, trailing spaces in excess of the column length are truncated prior to insertion and a warning is generated, regardless of the SQL mode in use. For CHAR columns, truncation of excess trailing spaces from inserted values is performed silently regardless of the SQL mode.
VARCHAR values are not padded when they are stored. Trailing spaces are retained when values are stored and retrieved, in conformance with standard SQL.
The following table illustrates the differences between CHAR and VARCHAR by showing the result of storing various string values into CHAR(4) and VARCHAR(4) columns (assuming that the column uses a single-byte character set such as latin1 ).
Value | CHAR(4) | Storage Required | VARCHAR(4) | Storage Required |
---|---|---|---|---|
'' | '    ' | 4 bytes | '' | 1 byte |
'ab' | 'ab  ' | 4 bytes | 'ab' | 3 bytes |
'abcd' | 'abcd' | 4 bytes | 'abcd' | 5 bytes |
'abcdefgh' | 'abcd' | 4 bytes | 'abcd' | 5 bytes |
The values shown as stored in the last row of the table apply only when not using strict SQL mode ; if strict mode is enabled, values that exceed the column length are not stored , and an error results.
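The storage columns of the table can be reproduced with a small model (assuming a single-byte character set and non-strict SQL mode, as in the table; the function names are illustrative):

```python
def char4_bytes(value: str) -> int:
    """CHAR(4) always occupies 4 bytes (space-padded on store)."""
    return 4

def varchar4_bytes(value: str) -> int:
    """VARCHAR(4): the (possibly truncated) value plus a 1-byte length prefix."""
    return min(len(value), 4) + 1

# Reproduces the storage figures of the table above
assert [(char4_bytes(v), varchar4_bytes(v))
        for v in ('', 'ab', 'abcd', 'abcdefgh')] == [(4, 1), (4, 3), (4, 5), (4, 5)]
```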
InnoDB encodes fixed-length fields greater than or equal to 768 bytes in length as variable-length fields, which can be stored off-page. For example, a CHAR(255) column can exceed 768 bytes if the maximum byte length of the character set is greater than 3, as it is with utf8mb4 .
If a given value is stored into the CHAR(4) and VARCHAR(4) columns, the values retrieved from the columns are not always the same because trailing spaces are removed from CHAR columns upon retrieval. The following example illustrates this difference:
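Since the original example is not reproduced here, the described round-trip can be simulated (a model of the documented behavior, not real server code; function names are illustrative):

```python
def char4_roundtrip(value: str) -> str:
    """Store into CHAR(4): right-pad to 4; retrieve: strip trailing spaces."""
    stored = value[:4].ljust(4)
    return stored.rstrip(' ')

def varchar4_roundtrip(value: str) -> str:
    """Store into VARCHAR(4): kept as-is (trailing spaces retained)."""
    return value[:4]

assert char4_roundtrip('ab ') == 'ab'      # trailing space lost on retrieval
assert varchar4_roundtrip('ab ') == 'ab '  # trailing space preserved
```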
Values in CHAR , VARCHAR , and TEXT columns are sorted and compared according to the character set collation assigned to the column.
All MySQL collations are of type PAD SPACE. This means that all CHAR, VARCHAR, and TEXT values are compared without regard to any trailing spaces. “Comparison” in this context does not include the LIKE pattern-matching operator, for which trailing spaces are significant. For example:
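A model of PAD SPACE comparison versus an exact LIKE-style match (illustrative only, not the server's collation code):

```python
def pad_space_equal(a: str, b: str) -> bool:
    """'=' under a PAD SPACE collation ignores trailing spaces."""
    return a.rstrip(' ') == b.rstrip(' ')

assert pad_space_equal('a', 'a ') is True  # equal under '=' comparison
assert ('a' == 'a ') is False              # exact match: trailing space matters
```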
This is not affected by the server SQL mode.
For more information about MySQL character sets and collations, see Chapter 10, Character Sets, Collations, Unicode. For additional information about storage requirements, see Section 11.7, “Data Type Storage Requirements”.
For those cases where trailing pad characters are stripped or comparisons ignore them, if a column has an index that requires unique values, inserting into the column values that differ only in number of trailing pad characters results in a duplicate-key error. For example, if a table contains 'a' , an attempt to store 'a ' causes a duplicate-key error.
Spatial Type Storage Requirements
MySQL stores geometry values using 4 bytes to indicate the SRID followed by the WKB representation of the value. The LENGTH() function returns the space in bytes required for value storage.
For descriptions of WKB and internal storage formats for spatial values, see Section 11.4.3, “Supported Spatial Data Formats”.
I would like to know what the max size is for a MySQL VARCHAR type.
I read that the max size is limited by the row size which is about 65k. I tried setting the field to varchar(20000) but it says that that's too large.
I could set it to varchar(10000) . What is the exact max I can set it to?
Date and Time Type Storage Requirements
For TIME , DATETIME , and TIMESTAMP columns, the storage required for tables created before MySQL 5.6.4 differs from tables created from 5.6.4 on. This is due to a change in 5.6.4 that permits these types to have a fractional part, which requires from 0 to 3 bytes.
Data Type | Storage Required Before MySQL 5.6.4 | Storage Required as of MySQL 5.6.4 |
---|---|---|
YEAR | 1 byte | 1 byte |
DATE | 3 bytes | 3 bytes |
TIME | 3 bytes | 3 bytes + fractional seconds storage |
DATETIME | 8 bytes | 5 bytes + fractional seconds storage |
TIMESTAMP | 4 bytes | 4 bytes + fractional seconds storage |
As of MySQL 5.6.4, storage for YEAR and DATE remains unchanged. However, TIME, DATETIME, and TIMESTAMP are represented differently. DATETIME is packed more efficiently, requiring 5 rather than 8 bytes for the nonfractional part, and all three types have a fractional part that requires from 0 to 3 bytes, depending on the fractional seconds precision of stored values.
Fractional Seconds Precision | Storage Required |
---|---|
0 | 0 bytes |
1, 2 | 1 byte |
3, 4 | 2 bytes |
5, 6 | 3 bytes |
For example, TIME(0) , TIME(2) , TIME(4) , and TIME(6) use 3, 4, 5, and 6 bytes, respectively. TIME and TIME(0) are equivalent and require the same storage.
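The fractional-seconds table reduces to ceil(fsp / 2) bytes; as a sketch (function names are illustrative):

```python
def fsp_bytes(fsp: int) -> int:
    """Fractional-seconds storage: 0-6 digits need ceil(fsp / 2) bytes."""
    assert 0 <= fsp <= 6
    return (fsp + 1) // 2

def time_storage(fsp: int = 0) -> int:
    """TIME needs 3 bytes plus fractional-seconds storage (MySQL >= 5.6.4)."""
    return 3 + fsp_bytes(fsp)

# TIME(0), TIME(2), TIME(4), TIME(6) -> 3, 4, 5, 6 bytes, matching the text
assert [time_storage(f) for f in (0, 2, 4, 6)] == [3, 4, 5, 6]
```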
For details about internal representation of temporal values, see MySQL Internals: Important Algorithms and Structures.
NDB Table Storage Requirements
NDB tables use 4-byte alignment ; all NDB data storage is done in multiples of 4 bytes. Thus, a column value that would typically take 15 bytes requires 16 bytes in an NDB table. For example, in NDB tables, the TINYINT , SMALLINT , MEDIUMINT , and INTEGER ( INT ) column types each require 4 bytes storage per record due to the alignment factor.
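The alignment rule is plain rounding up to a multiple of 4; as a sketch (the function name is illustrative):

```python
def ndb_aligned(nbytes: int) -> int:
    """Round storage up to NDB's 4-byte alignment."""
    return (nbytes + 3) // 4 * 4

assert ndb_aligned(15) == 16  # the 15-byte example from the text
assert ndb_aligned(1) == 4    # TINYINT still occupies 4 bytes per record
```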
Each BIT( M ) column takes M bits of storage space. Although an individual BIT column is not 4-byte aligned, NDB reserves 4 bytes (32 bits) per row for the first 1-32 bits needed for BIT columns, then another 4 bytes for bits 33-64, and so on.
While a NULL itself does not require any storage space, NDB reserves 4 bytes per row if the table definition contains any columns allowing NULL, up to 32 NULL columns. (If an NDB Cluster table is defined with more than 32 but no more than 64 NULL columns, then 8 bytes per row are reserved.)
Every table using the NDB storage engine requires a primary key; if you do not define a primary key, a “hidden” primary key is created by NDB. This hidden primary key consumes 31-35 bytes per table record.
You can use the ndb_size.pl Perl script to estimate NDB storage requirements. It connects to a current MySQL (not NDB Cluster) database and creates a report on how much space that database would require if it used the NDB storage engine. See Section 23.5.28, “ndb_size.pl — NDBCLUSTER Size Requirement Estimator” for more information.
InnoDB Table Storage Requirements
See Section 15.10, “InnoDB Row Formats” for information about storage requirements for InnoDB tables.
3 Answers
This is the var (variable) in varchar: you only store what you enter (plus an extra 1 or 2 bytes to store the length, for values up to 65,535 bytes)
If it was char(200) then you'd always store 200 characters, padded with 100 spaces
To be clear: Storing a string 100 characters in a varchar(200) field will take 101 bytes. Storing a string of 100 characters in a varchar(256) field will take 102 bytes. This is why you see varchar(255) so frequently; 255 characters is the longest string you can store in MySQL's varchar type with only one byte of overhead. Anything larger requires two bytes of overhead.
@mpen I'm not sure, but that's a great question! If you track down the answer, please report back here! :)
@rinogo The official MySQL docs are fuzzy on this subject but I'm pretty sure in varchar(N) N is the number of characters, so varchar(255) charset utf8mb4 would actually use up to 1022 bytes (1020 bytes of data plus a 2-byte length prefix, since the maximum exceeds 255 bytes). I'm not sure if it will always use the full number of bytes or what; I guess it depends how it's packed.
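The 255-versus-256 boundary discussed above can be checked numerically (single-byte character set assumed, sidestepping the utf8mb4 caveat from the comments; the function name is illustrative):

```python
def varchar_bytes(length: int, declared_max: int) -> int:
    """Stored size: data plus a 1-byte prefix if declared_max <= 255, else 2 bytes."""
    prefix = 1 if declared_max <= 255 else 2
    return length + prefix

assert varchar_bytes(100, 200) == 101  # varchar(200): 1-byte prefix
assert varchar_bytes(100, 256) == 102  # varchar(256): 2-byte prefix
```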
VARCHAR means that it's a variable-length character, so it's only going to take as much space as is necessary. But if you knew something about the underlying structure, it may make sense to restrict VARCHAR to some maximum amount.
For instance, if you were storing comments from the user, you may limit the comment field to only 4000 characters; if so, it doesn't really make any sense to make the sql table have a field that's larger than VARCHAR(4000).
Actually, it will take 101 bytes.
Numeric Type Storage Requirements
Values for DECIMAL (and NUMERIC) columns are represented using a binary format that packs nine decimal (base 10) digits into four bytes. Storage for the integer and fractional parts of each value are determined separately. Each multiple of nine digits requires four bytes, and the “leftover” digits require some fraction of four bytes. The storage required for excess digits is given by the following table.
Leftover Digits | Number of Bytes |
---|---|
0 | 0 |
1 | 1 |
2 | 1 |
3 | 2 |
4 | 2 |
5 | 3 |
6 | 3 |
7 | 4 |
8 | 4 |
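The DECIMAL rule (4 bytes per full group of nine digits, plus the leftover table above) can be sketched as follows (the function name is illustrative):

```python
LEFTOVER_BYTES = [0, 1, 1, 2, 2, 3, 3, 4, 4]  # bytes for 0..8 leftover digits

def decimal_storage(precision: int, scale: int) -> int:
    """Integer and fractional digit groups are costed independently."""
    def side(digits: int) -> int:
        return digits // 9 * 4 + LEFTOVER_BYTES[digits % 9]
    return side(precision - scale) + side(scale)

# DECIMAL(18,9): nine digits on each side -> 4 + 4 = 8 bytes
assert decimal_storage(18, 9) == 8
# DECIMAL(20,6): 14 integer digits -> 4 + 3 bytes; 6 fractional digits -> 3 bytes
assert decimal_storage(20, 6) == 10
```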