Forums » Linux

GeForce 2 better than GeForce FX 5200?

Oct 15, 2004 Klox link
I just ended up with a GeForce FX 5200 128MB (it was free) and thought I'd see how it compares to the GeForce2 Ti I've had for quite a while. So I swapped hardware without changing any drivers/settings/etc. Outside of a station, I got ~10fps with the FX 5200 and ~60fps with the GeForce2. I know the FX 5200 is a low-end card, but is there some setting I have turned on that the GeForce2 can handle and the FX 5200 can't?

I don't care which card I use. I'm just curious. If you want, I'm happy to experiment. I also have a GeForce FX 5900XT I could try.

Fedora Core 2
latest nVidia drivers

some of config.ini:
[refgl]
illummap=1
envmap=1
bumpmap=1
tc=0
minfilter=9987
maxfilter=9729
specular=1
windowmode=0
textureresolution=0
texturequality=32
gamma=8
def_freq=0
tfactor_hack=0
doshaders=1
dovertexbuffers=1
doindexbuffers=1
do_compiled_vertex_array=1
do_env_combine=1
do_extensions=1

[Vendetta]
Username=klox
VideoDriver=OpenGL Reference GKGL driver
AudioDriver=Open Sound System driver
xres=1024
yres=768
bpp=24

openglinfo.log:
[Fri Oct 15 19:26:08 2004]
Vendor: NVIDIA Corporation
Renderer: GeForce2 GTS/AGP/SSE/3DNOW!
Version: 1.5.1 NVIDIA 61.11
Extensions: GL_ARB_imaging GL_ARB_multitexture GL_ARB_point_parameters GL_ARB_point_sprite GL_ARB_shader_objects GL_ARB_shading_language_100 GL_ARB_texture_compression GL_ARB_texture_cube_map GL_ARB_texture_env_add GL_ARB_texture_env_combine GL_ARB_texture_env_dot3 GL_ARB_texture_mirrored_repeat GL_ARB_transpose_matrix GL_ARB_vertex_buffer_object GL_ARB_vertex_program GL_ARB_vertex_shader GL_ARB_window_pos GL_S3_s3tc GL_EXT_texture_env_add GL_EXT_abgr GL_EXT_bgra GL_EXT_blend_color GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_clip_volume_hint GL_EXT_compiled_vertex_array GL_EXT_draw_range_elements GL_EXT_fog_coord GL_EXT_multi_draw_arrays GL_EXT_packed_pixels GL_EXT_paletted_texture GL_EXT_pixel_buffer_object GL_EXT_point_parameters GL_EXT_rescale_normal GL_EXT_secondary_color GL_EXT_separate_specular_color GL_EXT_shared_texture_palette GL_EXT_stencil_wrap GL_EXT_texture_compression_s3tc GL_EXT_texture_cube_map GL_EXT_texture_edge_clamp GL_EXT_texture_env_combine GL_EXT_texture_env_dot3 GL_EXT_texture_filter_anisotropic GL_EXT_texture_lod GL_EXT_texture_lod_bias GL_EXT_texture_object GL_EXT_vertex_array GL_IBM_rasterpos_clip GL_IBM_texture_mirrored_repeat GL_KTX_buffer_region GL_NV_blend_square GL_NV_fence GL_NV_fog_distance GL_NV_light_max_exponent GL_NV_packed_depth_stencil GL_NV_pixel_data_range GL_NV_point_sprite GL_NV_register_combiners GL_NV_texgen_reflection GL_NV_texture_env_combine4 GL_NV_texture_rectangle GL_NV_vertex_array_range GL_NV_vertex_array_range2 GL_NV_vertex_program GL_NV_vertex_program1_1 GL_SGIS_generate_mipmap GL_SGIS_multitexture GL_SGIS_texture_lod GL_SUN_slice_accum
GLU Version: 1.3
GLU Extensions: GLU_EXT_nurbs_tessellator GLU_EXT_object_space_tess
glx Extensions: GLX_EXT_visual_info GLX_EXT_visual_rating GLX_SGIX_fbconfig GLX_SGIX_pbuffer GLX_SGI_video_sync GLX_SGI_swap_control GLX_ARB_get_proc_address
glx Version: 1.3
glx server Vendor: NVIDIA Corporation
glx server Version: 1.3
glx server Extensions: GLX_EXT_visual_info GLX_EXT_visual_rating GLX_SGIX_fbconfig GLX_SGIX_pbuffer GLX_SGI_video_sync GLX_SGI_swap_control
glx client Vendor: NVIDIA Corporation
glx client Version: 1.3
glx client Extensions: GLX_ARB_get_proc_address GLX_ARB_multisample GLX_EXT_visual_info GLX_EXT_visual_rating GLX_EXT_import_context GLX_SGI_video_sync GLX_NV_swap_group GLX_SGIX_fbconfig GLX_SGIX_pbuffer GLX_SGI_swap_control GLX_NV_float_buffer
GL_MAX_LIGHTS: 8
GL_MAX_CLIP_PLANES: 6
GL_MAX_MODELVIEW_STACK_DEPTH: 32
GL_MAX_PROJECTION_STACK_DEPTH: 4
GL_MAX_TEXTURE_STACK_DEPTH: 10
GL_SUBPIXEL_BITS: 4
GL_MAX_TEXTURE_SIZE: 2048
GL_MAX_PIXEL_MAP_TABLE: 65536
GL_MAX_NAME_STACK_DEPTH: 128
GL_MAX_LIST_NESTING: 64
GL_MAX_EVAL_ORDER: 8
GL_MAX_VIEWPORT_DIMS: 4096
GL_MAX_ATTRIB_STACK_DEPTH: 16
GL_AUX_BUFFERS: 0
GL_RGBA_MODE: 1
GL_INDEX_MODE: 0
GL_DOUBLEBUFFER: 1
GL_STEREO: 0
GL_POINT_SIZE_RANGE: 1.000000 - 63.375000
GL_POINT_SIZE_GRANULARITY: 0.125000
GL_LINE_WIDTH_RANGE: 0.500000 - 10.000000
GL_LINE_WIDTH_GRANULARITY: 0.125000
GL_MAX_TEXTURE_UNITS: 2
GL_MAX_CUBE_MAP_TEXTURE_SIZE: 512
GL_MAX_ELEMENTS_VERTICES: 4096
GL_MAX_ELEMENTS_INDICES: 4096
GL_POINT_SIZE_MIN: 0.000000
GL_POINT_SIZE_MAX: 63.375000
GL_POINT_FADE_THRESHOLD_SIZE: 1.000000
GL_MAX_TEXTURE_MAX_ANISOTROPY: 2.000000
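
(That log is from the GeForce2, by the way. Since I'm happy to swap cards around anyway, I could also dump what each one reports with glxinfo and diff the two; the file names below are just placeholders:)

glxinfo > gl-gf2.txt      # with the GeForce2 installed
glxinfo > gl-fx5200.txt   # after swapping in the FX 5200
diff gl-gf2.txt gl-fx5200.txt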
Oct 16, 2004 Konekobasu link
Most likely "doshaders=1" is the cause for the slowdowns on the FX 5200.
Try "doshaders=0". While the GF 5200 does support shaders (which the GF2 does not), it supports them really badly, i.e. shader performance sucks on the GF 5200.
Oct 19, 2004 thurisaz link
I have an FX 5200 and have gotten nothing but fabulous performance on a P4 2GHz... One thing to keep in mind is that I believe this card will co-opt system memory for video RAM, rather than having the chips on the card itself...

You should find a "video aperture" setting in your BIOS that will allow you to set the amount of system RAM to use in this way. My first guess is that maybe you have left this at a low or default setting.
Oct 19, 2004 Froste link
The 5200 has only 4 pipelines, with the core and its 128-bit memory at 200MHz; up through the 5700 you still only get 4 pipelines, just with a faster core and memory. The 5800 doubles the pipelines to 8, and the 5900 also doubles the memory bus to 256-bit, with a core at 400MHz and memory at 425MHz. The base 6800 has 12 pipelines, and the GT/Ultra have 16, with core and memory at 400/550MHz respectively.
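
Rough memory bandwidth from those numbers (assuming DDR, so double the quoted memory clock): the 5200 gets about 200MHz x 2 x 128bit / 8 = 6.4GB/s, while the 5900 gets about 425MHz x 2 x 256bit / 8 = 27.2GB/s, so roughly four times the bandwidth before you even count the extra pipes.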

Obviously a 6800 is the drool object, but lacking funds, I'd say get a 5900; if nothing else, you'll be able to play Doom 3 (:
Oct 19, 2004 roguelazer link
Get a 6600GT. The performance of a 6800 (non-GT), with the power consumption of a 5600 and the cost of a 5200 ($199). Available now for PCI Express, and soon for AGP 8x.

GeForce FX 5200 ~= GeForce4 MX 460
GeForce FX 5600 ~= GeForce4 Ti 4200
GeForce FX 5700 ~= GeForce4 Ti 4800
GeForce FX 5800/5900 > GeForce4
GeForce 6[6|8]00 > GeForce FX
GeForce 6200 = No freaking idea
Oct 23, 2004 MonkRX link
That's a really rough representation of nVidia cards.

I'd prefer to split them up by market segment... and basically claim that the newer generation is better than the old. But that would still be a skewed representation.
Oct 24, 2004 Froste link
The real problem with nVidia cards is that most variants are useless. If it were a simple matter of adding something here and adding something there, then at least the cards would show a tangible, sequential improvement. Instead, they add a pipe but slow down the memory, or add memory but narrow the memory bus, or clock up the core but remove a pipe, until you're left with a mess where it's hard not only to remember all the variants, but also to remember their effective capabilities, since they excel at different computations or effects.

In short, they whore themselves. Luxury callgirl with an empty head, or smart street skank in a twist.
Oct 30, 2004 Demonen link
LOL Froste

Care to elaborate on that last bit?
Oct 30, 2004 Froste link
Well, no :p you'll have to use your imagination (: